datasetId (string, 5-121 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-2.64M) | likes (int64, 0-6.41k) | tags (sequence, 1-7.92k items) | task_categories (sequence, 0-40 items, nullable ⌀) | createdAt (unknown) | card (string, 19-1M chars)
---|---|---|---|---|---|---|---|---|
Shashank-V-H/finetuning_demo | Shashank-V-H | "2024-12-01T16:56:00Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T16:55:58Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 20985
num_examples: 17
download_size: 12727
dataset_size: 20985
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
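The card above ships only the auto-generated metadata. As a minimal, assumed sketch (not part of the original card), the `default` configuration declared there can be loaded with the `datasets` library and its single `prompt` column inspected:

```python
from datasets import load_dataset

# Load the single "default" configuration; per the metadata it has one
# "train" split with 17 string prompts.
ds = load_dataset("Shashank-V-H/finetuning_demo", split="train")

print(ds)               # Dataset({features: ['prompt'], num_rows: 17})
print(ds[0]["prompt"])  # first prompt string
```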
|
davidstroudLLJD/bbc | davidstroudLLJD | "2024-12-01T17:11:43Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T17:05:57Z" | ---
dataset_info:
features:
- name: title
dtype: string
- name: link
dtype: string
- name: pub_date
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 7763
num_examples: 31
download_size: 8081
dataset_size: 7763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
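As with the previous entry, this card is metadata-only. A small illustrative snippet (an assumption, not taken from the original card) for browsing the four string columns declared above:

```python
from datasets import load_dataset

# One "train" split with 31 rows of title/link/pub_date/description strings.
ds = load_dataset("davidstroudLLJD/bbc", split="train")

for row in ds.select(range(3)):
    # Each row is a plain dict keyed by the feature names from dataset_info.
    print(row["pub_date"], "-", row["title"])
    print("  ", row["link"])
```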
|
richmondsin/arc_it_results | richmondsin | "2024-12-01T17:07:44Z" | 4 | 0 | [
"region:us"
] | null | "2024-12-01T17:07:35Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/arc_it_results\"\
,\n\tname=\"google__gemma-2-2b__arc_it\",\n\tsplit=\"latest\"\n)\n```\n\n## Latest\
\ results\n\nThese are the [latest results from run 2024-12-01T12-07-35.117919](https://huggingface.co/datasets/richmondsin/arc_it_results/blob/main/google/gemma-2-2b/results_2024-12-01T12-07-35.117919.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"arc_it\": {\n \
\ \"alias\": \"arc_it\",\n \"acc,none\": 0.3888888888888889,\n\
\ \"acc_stderr,none\": 0.014599413987491596,\n \"acc_norm,none\"\
: 0.4390681003584229,\n \"acc_norm_stderr,none\": 0.014862216324833933\n\
\ }\n },\n \"arc_it\": {\n \"alias\": \"arc_it\",\n \"\
acc,none\": 0.3888888888888889,\n \"acc_stderr,none\": 0.014599413987491596,\n\
\ \"acc_norm,none\": 0.4390681003584229,\n \"acc_norm_stderr,none\"\
: 0.014862216324833933\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__arc_it
data_files:
- split: 2024_12_01T12_07_35.117919
path:
- '**/samples_arc_it_2024-12-01T12-07-35.117919.jsonl'
- split: latest
path:
- '**/samples_arc_it_2024-12-01T12-07-35.117919.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/arc_it_results",
name="google__gemma-2-2b__arc_it",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-01T12-07-35.117919](https://huggingface.co/datasets/richmondsin/arc_it_results/blob/main/google/gemma-2-2b/results_2024-12-01T12-07-35.117919.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" and "latest" splits for each eval):
```python
{
"all": {
"arc_it": {
"alias": "arc_it",
"acc,none": 0.3888888888888889,
"acc_stderr,none": 0.014599413987491596,
"acc_norm,none": 0.4390681003584229,
"acc_norm_stderr,none": 0.014862216324833933
}
},
"arc_it": {
"alias": "arc_it",
"acc,none": 0.3888888888888889,
"acc_stderr,none": 0.014599413987491596,
"acc_norm,none": 0.4390681003584229,
"acc_norm_stderr,none": 0.014862216324833933
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ADHIZ/asxascxsasss | ADHIZ | "2024-12-01T17:11:19Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T17:11:17Z" | ---
dataset_info:
features:
- name: code_language
dtype: string
- name: code
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 202
num_examples: 2
download_size: 1847
dataset_size: 202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
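A hedged sketch (not from the original card) of iterating over the two declared examples and their `code_language`/`code`/`answer` fields:

```python
from datasets import load_dataset

# Tiny dataset: 2 rows, each with a language tag, a code snippet, and an answer.
ds = load_dataset("ADHIZ/asxascxsasss", split="train")

for example in ds:
    print(f"[{example['code_language']}]")
    print(example["code"])
    print("->", example["answer"])
```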
|
richmondsin/arc_id_results | richmondsin | "2024-12-01T17:48:57Z" | 4 | 0 | [
"region:us"
] | null | "2024-12-01T17:48:48Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/arc_id_results\"\
,\n\tname=\"google__gemma-2-2b__arc_id\",\n\tsplit=\"latest\"\n)\n```\n\n## Latest\
\ results\n\nThese are the [latest results from run 2024-12-01T12-48-48.275872](https://huggingface.co/datasets/richmondsin/arc_id_results/blob/main/google/gemma-2-2b/results_2024-12-01T12-48-48.275872.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"arc_id\": {\n \
\ \"alias\": \"arc_id\",\n \"acc,none\": 0.36379928315412186,\n\
\ \"acc_stderr,none\": 0.014407564179556647,\n \"acc_norm,none\"\
: 0.4014336917562724,\n \"acc_norm_stderr,none\": 0.014679984936613356\n\
\ }\n },\n \"arc_id\": {\n \"alias\": \"arc_id\",\n \"\
acc,none\": 0.36379928315412186,\n \"acc_stderr,none\": 0.014407564179556647,\n\
\ \"acc_norm,none\": 0.4014336917562724,\n \"acc_norm_stderr,none\"\
: 0.014679984936613356\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__arc_id
data_files:
- split: 2024_12_01T12_48_48.275872
path:
- '**/samples_arc_id_2024-12-01T12-48-48.275872.jsonl'
- split: latest
path:
- '**/samples_arc_id_2024-12-01T12-48-48.275872.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/arc_id_results",
name="google__gemma-2-2b__arc_id",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-01T12-48-48.275872](https://huggingface.co/datasets/richmondsin/arc_id_results/blob/main/google/gemma-2-2b/results_2024-12-01T12-48-48.275872.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" and "latest" splits for each eval):
```python
{
"all": {
"arc_id": {
"alias": "arc_id",
"acc,none": 0.36379928315412186,
"acc_stderr,none": 0.014407564179556647,
"acc_norm,none": 0.4014336917562724,
"acc_norm_stderr,none": 0.014679984936613356
}
},
"arc_id": {
"alias": "arc_id",
"acc,none": 0.36379928315412186,
"acc_stderr,none": 0.014407564179556647,
"acc_norm,none": 0.4014336917562724,
"acc_norm_stderr,none": 0.014679984936613356
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
k4d3/bogexboog | k4d3 | "2024-12-02T11:12:47Z" | 4 | 1 | [
"license:wtfpl",
"region:us"
] | null | "2024-12-01T18:04:06Z" | ---
license: wtfpl
---
|
udamaurizio/parler_tts_mini_V01_TestVoice_Italian_tagged | udamaurizio | "2024-12-01T18:27:36Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T18:27:34Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
splits:
- name: train
num_bytes: 1015
num_examples: 5
download_size: 5336
dataset_size: 1015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
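A minimal, assumed example (not part of the original card) that loads the five tagged utterances and prints the acoustic-statistics columns declared above:

```python
from datasets import load_dataset

# Five rows of text plus per-utterance acoustic statistics (pitch, SNR, C50, rate).
ds = load_dataset(
    "udamaurizio/parler_tts_mini_V01_TestVoice_Italian_tagged", split="train"
)

for row in ds:
    print(
        f"snr={row['snr']:.1f}  c50={row['c50']:.1f}  "
        f"rate={row['speaking_rate']:.2f}  text={row['text'][:40]!r}"
    )
```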
|
onekat/lit-dataset | onekat | "2024-12-01T18:51:23Z" | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T18:51:21Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: sentido
dtype: string
splits:
- name: train
num_bytes: 308714
num_examples: 1124
download_size: 190133
dataset_size: 308714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
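As a hedged illustration (assumed, not from the card), the balance of the integer `label` column and the distinct `sentido` values can be checked after loading:

```python
from collections import Counter

from datasets import load_dataset

# 1,124 rows with a text, an integer label, and a "sentido" string.
ds = load_dataset("onekat/lit-dataset", split="train")

print(Counter(ds["label"]))    # label frequencies
print(Counter(ds["sentido"]))  # distinct "sentido" values and their counts
```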
|
sdiazlor/my-distiset-c631d9f8 | sdiazlor | "2024-12-01T18:52:44Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T18:52:41Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 328
num_examples: 1
download_size: 3496
dataset_size: 328
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-c631d9f8
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-c631d9f8/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-c631d9f8/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 2,
"text": "The recent update to the restaurant\u0027s menu has been a game-changer, offering a more diverse range of vegan options that are both delicious and reasonably priced. The new seasonal menu items are a perfect addition to their already impressive selection, and the friendly staff are always happy to make recommendations."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-c631d9f8", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-c631d9f8")
```
</details>
|
7wolf/translation-300k | 7wolf | "2024-12-01T19:18:48Z" | 4 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:18:40Z" | ---
dataset_info:
features:
- name: dst
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 41095474
num_examples: 300000
- name: validation
num_bytes: 255814
num_examples: 500
- name: test
num_bytes: 365477
num_examples: 1000
download_size: 22601564
dataset_size: 41716765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
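Because this entry declares three splits, a brief assumed example (not from the original card) of loading them selectively:

```python
from datasets import load_dataset

# Load only the validation split (500 src/dst pairs); "train" and "test"
# are also declared in the configs above.
val = load_dataset("7wolf/translation-300k", split="validation")

pair = val[0]
print("src:", pair["src"])
print("dst:", pair["dst"])

# Or load everything at once as a DatasetDict keyed by split name.
all_splits = load_dataset("7wolf/translation-300k")
print(all_splits)
```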
|
sdiazlor/my-distiset-1addf00d | sdiazlor | "2024-12-01T19:30:02Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T19:29:59Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 444
num_examples: 1
download_size: 4305
dataset_size: 444
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-1addf00d
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-1addf00d/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-1addf00d/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "The recent advancements in Quantum Field Theory have led to a paradigm shift in our understanding of spacetime\u0027s role in high-energy particle interactions. However, I am still unconvinced by the theory\u0027s ability to fully account for the observed phenomena at the Planck scale. A more comprehensive analysis of the implications on gravitational waves and their interactions with matter would be required to fully endorse this theory."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-1addf00d", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-1addf00d")
```
</details>
|
richmondsin/arc_mr_results | richmondsin | "2024-12-01T20:10:15Z" | 4 | 0 | [
"region:us"
] | null | "2024-12-01T20:09:43Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/arc_mr_results\"\
,\n\tname=\"google__gemma-2-2b__arc_mr\",\n\tsplit=\"latest\"\n)\n```\n\n## Latest\
\ results\n\nThese are the [latest results from run 2024-12-01T15-09-43.272319](https://huggingface.co/datasets/richmondsin/arc_mr_results/blob/main/google/gemma-2-2b/results_2024-12-01T15-09-43.272319.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"arc_mr\": {\n \
\ \"alias\": \"arc_mr\",\n \"acc,none\": 0.25089605734767023,\n\
\ \"acc_stderr,none\": 0.012983163493905296,\n \"acc_norm,none\"\
: 0.2616487455197133,\n \"acc_norm_stderr,none\": 0.01316295520295665\n\
\ }\n },\n \"arc_mr\": {\n \"alias\": \"arc_mr\",\n \"\
acc,none\": 0.25089605734767023,\n \"acc_stderr,none\": 0.012983163493905296,\n\
\ \"acc_norm,none\": 0.2616487455197133,\n \"acc_norm_stderr,none\"\
: 0.01316295520295665\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__arc_mr
data_files:
- split: 2024_12_01T15_09_43.272319
path:
- '**/samples_arc_mr_2024-12-01T15-09-43.272319.jsonl'
- split: latest
path:
- '**/samples_arc_mr_2024-12-01T15-09-43.272319.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/arc_mr_results",
name="google__gemma-2-2b__arc_mr",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-01T15-09-43.272319](https://huggingface.co/datasets/richmondsin/arc_mr_results/blob/main/google/gemma-2-2b/results_2024-12-01T15-09-43.272319.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" and "latest" splits for each eval):
```python
{
"all": {
"arc_mr": {
"alias": "arc_mr",
"acc,none": 0.25089605734767023,
"acc_stderr,none": 0.012983163493905296,
"acc_norm,none": 0.2616487455197133,
"acc_norm_stderr,none": 0.01316295520295665
}
},
"arc_mr": {
"alias": "arc_mr",
"acc,none": 0.25089605734767023,
"acc_stderr,none": 0.012983163493905296,
"acc_norm,none": 0.2616487455197133,
"acc_norm_stderr,none": 0.01316295520295665
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
sdiazlor/my-distiset-8e6109c6 | sdiazlor | "2024-12-01T20:12:19Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T20:12:07Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2958
num_examples: 10
download_size: 5305
dataset_size: 2958
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-8e6109c6
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-8e6109c6/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-8e6109c6/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "I recently stayed at the hotel and had an amazing experience. The staff were friendly and helpful, the room was clean and comfortable, and the location was perfect for exploring the city. I would definitely recommend this hotel to anyone traveling to the area."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-8e6109c6", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-8e6109c6")
```
</details>
|
sdiazlor/my-distiset-20721097 | sdiazlor | "2024-12-01T20:27:53Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T20:27:49Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': negative
'2': positive
splits:
- name: train
num_bytes: 3690
num_examples: 10
download_size: 6811
dataset_size: 3690
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-20721097
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-20721097/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-20721097/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 2,
"text": "I was pleasantly surprised by the quality of the new smartphone, considering its affordable price. The battery life is excellent and the camera is decent. However, the screen resolution could be better. Overall, I\u0027m satisfied with my purchase."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-20721097", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-20721097")
```
</details>
|
Erland/NLP701_Assignment2_Subtask3_KTO_Dataset | Erland | "2024-12-01T20:59:32Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T20:29:44Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
- name: bertscore_f1
dtype: float64
- name: rank
dtype: int64
- name: file_name
dtype: string
- name: categories
dtype: string
- name: subcategories
dtype: string
- name: reference_explanation
dtype: string
splits:
- name: train
num_bytes: 3270050
num_examples: 440
download_size: 585911
dataset_size: 3270050
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
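A sketch under assumptions (not part of the original card): the boolean `label` and `bertscore_f1` columns declared above lend themselves to quick filtering and ranking with the standard `datasets` API:

```python
from datasets import load_dataset

# 440 prompt/completion pairs with a boolean KTO label and a BERTScore F1.
ds = load_dataset("Erland/NLP701_Assignment2_Subtask3_KTO_Dataset", split="train")

desirable = ds.filter(lambda ex: ex["label"])        # label == True
undesirable = ds.filter(lambda ex: not ex["label"])  # label == False
print(len(desirable), "desirable /", len(undesirable), "undesirable")

# Highest-scoring completions by BERTScore F1.
top = ds.sort("bertscore_f1", reverse=True).select(range(5))
print(top["bertscore_f1"])
```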
|
sdiazlor/my-distiset-3584bf86 | sdiazlor | "2024-12-01T20:34:45Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T20:34:43Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
'2': neutral
splits:
- name: train
num_bytes: 3223
num_examples: 10
download_size: 5261
dataset_size: 3223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-3584bf86
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-3584bf86/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-3584bf86/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 0,
"text": "I recently purchased this coffee maker and I\u0027m extremely happy with its performance. It\u0027s easy to use, looks great on my countertop, and the coffee it makes is delicious and rich. The price was a bit steep, but it was worth it for the quality and convenience it offers."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-3584bf86", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-3584bf86")
```
</details>
|
sdiazlor/my-distiset-b884cfce | sdiazlor | "2024-12-01T20:43:02Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T20:42:59Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': unrelated
'1': negative
'2': shipping
'3': pricing
'4': neutral
'5': mixed
'6': positive
'7': product-quality
'8': product-description
splits:
- name: train
num_bytes: 3688
num_examples: 10
download_size: 7365
dataset_size: 3688
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-b884cfce
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-b884cfce/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-b884cfce/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 5,
"text": "The product\u0027s intricate design, which incorporates postmodern deconstructionist principles, adds a level of sophistication to my home office decor. However, I must admit that the customer service team\u0027s responses to my inquiries were somewhat delayed. The price of the item was higher than I anticipated, but I suppose that\u0027s what I get for purchasing a premium product."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-b884cfce", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-b884cfce")
```
</details>
|
GautamPrakash2002/finetuning_demo | GautamPrakash2002 | "2024-12-01T20:44:47Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T20:44:44Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 69677
num_examples: 100
download_size: 28045
dataset_size: 69677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
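As an assumed convenience (not from the original card), the 100 prompts can also be pulled into a pandas DataFrame for quick inspection:

```python
from datasets import load_dataset

# Single "train" split with 100 string prompts.
ds = load_dataset("GautamPrakash2002/finetuning_demo", split="train")

# Convert the single "prompt" column into a pandas DataFrame.
df = ds.to_pandas()
print(df.shape)                            # (100, 1)
print(df["prompt"].str.len().describe())   # prompt-length statistics
```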
|
sdiazlor/my-distiset-fcd0fe26 | sdiazlor | "2024-12-01T20:45:41Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T20:45:38Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
'2': neutral
'3': mixed
splits:
- name: train
num_bytes: 3680
num_examples: 10
download_size: 5122
dataset_size: 3680
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-fcd0fe26
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-fcd0fe26/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-fcd0fe26/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "I recently visited the new eco-friendly caf\u00e9 in town and was thoroughly unimpressed. The environmental claims they made on their website seemed too good to be true, and the\u0027sustainable\u0027 materials they used for their cups and utensils looked cheap and flimsy to me. I ordered a coffee and a sandwich, but unfortunately, the food was overpriced and tasted bland. The staff was friendly and attentive, but the overall experience was a letdown. I wouldn\u0027t recommend this place to my friends and family."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-fcd0fe26", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-fcd0fe26")
```
</details>
|
richmondsin/arc_hi_results | richmondsin | "2024-12-01T21:17:41Z" | 4 | 0 | [
"region:us"
] | null | "2024-12-01T21:17:29Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/arc_hi_results\"\
,\n\tname=\"google__gemma-2-2b__arc_hi\",\n\tsplit=\"latest\"\n)\n```\n\n## Latest\
\ results\n\nThese are the [latest results from run 2024-12-01T16-17-29.326907](https://huggingface.co/datasets/richmondsin/arc_hi_results/blob/main/google/gemma-2-2b/results_2024-12-01T16-17-29.326907.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"arc_hi\": {\n \
\ \"alias\": \"arc_hi\",\n \"acc,none\": 0.27419354838709675,\n\
\ \"acc_stderr,none\": 0.013359850379455064,\n \"acc_norm,none\"\
: 0.3046594982078853,\n \"acc_norm_stderr,none\": 0.013783791363713757\n\
\ }\n },\n \"arc_hi\": {\n \"alias\": \"arc_hi\",\n \"\
acc,none\": 0.27419354838709675,\n \"acc_stderr,none\": 0.013359850379455064,\n\
\ \"acc_norm,none\": 0.3046594982078853,\n \"acc_norm_stderr,none\"\
: 0.013783791363713757\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__arc_hi
data_files:
- split: 2024_12_01T16_17_29.326907
path:
- '**/samples_arc_hi_2024-12-01T16-17-29.326907.jsonl'
- split: latest
path:
- '**/samples_arc_hi_2024-12-01T16-17-29.326907.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/arc_hi_results",
name="google__gemma-2-2b__arc_hi",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-01T16-17-29.326907](https://huggingface.co/datasets/richmondsin/arc_hi_results/blob/main/google/gemma-2-2b/results_2024-12-01T16-17-29.326907.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" and "latest" splits for each eval):
```python
{
"all": {
"arc_hi": {
"alias": "arc_hi",
"acc,none": 0.27419354838709675,
"acc_stderr,none": 0.013359850379455064,
"acc_norm,none": 0.3046594982078853,
"acc_norm_stderr,none": 0.013783791363713757
}
},
"arc_hi": {
"alias": "arc_hi",
"acc,none": 0.27419354838709675,
"acc_stderr,none": 0.013359850379455064,
"acc_norm,none": 0.3046594982078853,
"acc_norm_stderr,none": 0.013783791363713757
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
khairi/pubmed-text-10 | khairi | "2024-12-01T22:07:52Z" | 4 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:24:25Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2490717067
num_examples: 2434765
- name: test
num_bytes: 1043267
num_examples: 1000
- name: valid
num_bytes: 511516
num_examples: 499
download_size: 1442909935
dataset_size: 2492271850
---
# Dataset Card for "pubmed-text-10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dgambettaphd/D_gen0_run1_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-12-01T21:37:19Z" | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:37:14Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 580202
num_examples: 1000
download_size: 362096
dataset_size: 580202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
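A minimal assumed example (not in the original card) that streams the documents instead of downloading the full parquet file first:

```python
from datasets import load_dataset

# Stream the 1,000 (id, doc) rows without materialising them on disk.
stream = load_dataset(
    "dgambettaphd/D_gen0_run1_llama2-7b_wiki_doc1000_real64_synt64",
    split="train",
    streaming=True,
)

for row in stream.take(3):
    print(row["id"], row["doc"][:60], "...")
```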
|
Erland/NLP701_Assignment2_Subtask3_KTO_Dataset_3 | Erland | "2024-12-01T21:52:30Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:52:27Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
- name: bertscore_f1
dtype: float64
- name: rank
dtype: int64
- name: file_name
dtype: string
- name: categories
dtype: string
- name: subcategories
dtype: string
- name: reference_explanation
dtype: string
splits:
- name: train
num_bytes: 1770470
num_examples: 440
download_size: 265815
dataset_size: 1770470
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-b695a775 | sdiazlor | "2024-12-01T22:13:46Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T22:13:43Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': customer-service
'1': price
'2': shipping
'3': product-quality
splits:
- name: train
num_bytes: 2986
num_examples: 10
download_size: 5662
dataset_size: 2986
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-b695a775
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-b695a775/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-b695a775/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 3,
"text": "I\u0027ve had this product for a week now and it\u0027s been working flawlessly, the battery life is quite impressive, and the picture quality is really good. I\u0027m very satisfied with my purchase."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-b695a775", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-b695a775")
```
</details>
|
khairi/pubmed-text-02 | khairi | "2024-12-01T22:57:33Z" | 4 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:13:44Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2503107943
num_examples: 2424548
- name: test
num_bytes: 1023191
num_examples: 1016
- name: valid
num_bytes: 534524
num_examples: 502
download_size: 1450906824
dataset_size: 2504665658
---
# Dataset Card for "pubmed-text-02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ericflo/Llama-3.2-3B-COT | ericflo | "2024-12-02T01:35:26Z" | 4 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:39:08Z" | ---
license: apache-2.0
---
|
dgambettaphd/D_gen1_run1_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-12-01T22:44:14Z" | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:44:09Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 582779
num_examples: 1000
download_size: 355596
dataset_size: 582779
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fknguedia/SQL-GENERATOR-DATASETS | fknguedia | "2024-12-02T09:10:35Z" | 4 | 0 | [
"license:other",
"region:us"
] | null | "2024-12-01T22:55:29Z" | ---
license: other
license_name: ece-mscde-fkn
license_link: LICENSE
viewer: true
---
## View code

Generation notebook: https://colab.research.google.com/drive/1rLk-mdsWsdxwQdYYJS24rAP9KABtbiqu?usp=sharing

## Example

```json
{"messages": [
    {"role": "system", "content": "You are a SQL expert assistant. Generate clear, efficient SQL queries based on user requests. Provide only the SQL query without any additional text or explanation."},
    {"role": "user", "content": "What are the top 5 most popular genres of music in the database, based on the number of tracks in each genre?"},
    {"role": "assistant", "content": "SELECT T2.Name, COUNT(T1.TrackId) as TrackCount FROM Track T1 INNER JOIN Genre T2 ON T1.GenreId = T2.GenreId GROUP BY T2.Name ORDER BY TrackCount DESC LIMIT 5;"}
  ]
}
```
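A minimal sketch for iterating over records in this `messages` format, assuming the data is exported as JSON Lines (one object per line); the file name below is hypothetical:

```python
import json

def load_messages(path):
    """Yield the system/user/assistant messages of each JSONL record."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield record["messages"]

# "sql_generator_train.jsonl" is a placeholder name, not part of this repo
for messages in load_messages("sql_generator_train.jsonl"):
    question = messages[1]["content"]  # natural-language request
    sql = messages[2]["content"]       # SQL answer
    print(question, "->", sql)
```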
## Notes

See the OpenAI fine-tuning guide: https://platform.openai.com/docs/guides/fine-tuning/

## Acknowledgements

Sincere thanks to *@paulml* and *@TW3Partners*.
|
khairi/pubmed-text-03 | khairi | "2024-12-01T23:40:15Z" | 4 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:57:34Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2459987042
num_examples: 2373637
- name: test
num_bytes: 1001825
num_examples: 1012
- name: valid
num_bytes: 534559
num_examples: 500
download_size: 1425187695
dataset_size: 2461523426
---
# Dataset Card for "pubmed-text-03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
urbushey/product_catalog_training_1 | urbushey | "2024-12-01T23:00:50Z" | 4 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-01T23:00:02Z" | ---
license: apache-2.0
---
|
sdiazlor/my-distiset-0e073ab9 | sdiazlor | "2024-12-01T23:08:01Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:07:58Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': mixed
'1': negative
'2': neutral
'3': positive
splits:
- name: train
num_bytes: 3803
num_examples: 10
download_size: 6326
dataset_size: 3803
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-0e073ab9
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0e073ab9/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0e073ab9/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 3,
"text": "I recently stayed at this hotel during a business trip and was pleasantly surprised by the exceptional service and cleanliness. The staff was friendly and accommodating, and the breakfast buffet was impressive. I would highly recommend this hotel to anyone looking for a comfortable stay."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0e073ab9", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0e073ab9")
```
</details>
|
Abdul110/distilabel-example | Abdul110 | "2024-12-01T23:24:42Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:24:40Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: completion
dtype: string
- name: id
dtype: int64
- name: input
dtype: 'null'
- name: motivation_app
dtype: 'null'
- name: prompt
dtype: string
- name: source
dtype: string
- name: subcategory
dtype: string
- name: generation
dtype: 'null'
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: 'null'
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 21015
num_examples: 10
download_size: 26098
dataset_size: 21015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khairi/pubmed-text-04 | khairi | "2024-12-02T00:21:59Z" | 4 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:40:16Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2416644748
num_examples: 2345417
- name: test
num_bytes: 1051271
num_examples: 1027
- name: valid
num_bytes: 522261
num_examples: 500
download_size: 1400588219
dataset_size: 2418218280
---
# Dataset Card for "pubmed-text-04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
koml/synthetic-dataset | koml | "2024-12-01T23:45:54Z" | 4 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:45:03Z" | ---
dataset_info:
features:
- name: index
dtype: int64
- name: image
dtype: image
- name: question_en
dtype: string
- name: question_jp
dtype: string
- name: pdf_name
dtype: string
- name: pdf_page
dtype: int64
splits:
- name: train
num_bytes: 33052524
num_examples: 79
download_size: 23111211
dataset_size: 33052524
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
--- |
sdiazlor/my-distiset-e5cd762a | sdiazlor | "2024-12-01T23:45:16Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:45:12Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 12018
num_examples: 10
download_size: 14883
dataset_size: 12018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-e5cd762a
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-e5cd762a/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-e5cd762a/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "That\u0027s a common issue. To troubleshoot, I\u0027ll need to ask a few questions. Can you please confirm that your Wi-Fi is working properly on other devices connected to the same network? Have you tried restarting your phone and your router? Also, are you in a physical location where you don\u0027t have any physical obstructions between you and the router, like walls or furniture?",
"prompt": "I\u0027m having issues with my phone. It\u0027s turned on but just won\u0027t connect to my Wi-Fi. \n\n",
"system_prompt": "You are a phone company customer service representative with a direct and no-nonsense attitude. Your purpose is to deal with customer inquiries, resolve issues, and provide quick solutions while maintaining a firm but respectful tone. Respond to customer calls, emails, and chats with a straightforward and occasionally blunt demeanor, always keeping in mind the company\u0027s policies and procedures. User questions are direct and concise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-e5cd762a", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-e5cd762a")
```
</details>
|
sdiazlor/my-distiset-6475d71b | sdiazlor | "2024-12-01T23:47:04Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:47:01Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 9596
num_examples: 10
download_size: 10565
dataset_size: 9596
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-6475d71b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-6475d71b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-6475d71b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "Losing signal can be frustrating. I\u0027d be happy to help you with that. To determine if you\u0027re eligible for a replacement, I\u0027ll need to check a few things. Can you please provide me with your phone number and the exact date you noticed the signal loss?",
"prompt": "My cell phone just lost signal and I was wondering if it can be replaced? \n\n",
"system_prompt": "You are a phone company customer service representative with a direct and no-nonsense attitude. Your purpose is to deal with customer inquiries, resolve issues, and provide quick solutions while maintaining a firm but respectful tone. Respond to customer calls, emails, and chats with a straightforward and occasionally blunt demeanor, always keeping in mind the company\u0027s policies and procedures. User questions are direct and concise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-6475d71b", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-6475d71b")
```
</details>
|
sdiazlor/my-distiset-0b8b1bd2 | sdiazlor | "2024-12-01T23:49:43Z" | 4 | 0 | [
"size_categories:n<1K",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:49:40Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': negative
'2': positive
splits:
- name: train
num_bytes: 3461
num_examples: 10
download_size: 5491
dataset_size: 3461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-0b8b1bd2
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0b8b1bd2/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0b8b1bd2/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 2,
"text": "The new espresso machine at the coffee shop has been a game-changer, with its ability to precision-brew a wide range of coffee drinks at an unprecedented speed and quality. The machine\u0027s user interface is intuitive and the maintenance is relatively easy, making it a great addition to any coffee-lover\u0027s arsenal."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0b8b1bd2", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0b8b1bd2")
```
</details>
|
sdiazlor/my-distiset-dec8bcd8 | sdiazlor | "2024-12-01T23:52:24Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:52:20Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 3384
num_examples: 10
download_size: 5095
dataset_size: 3384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-dec8bcd8
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-dec8bcd8/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-dec8bcd8/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "The hotel\u0027s customer service was attentive and knowledgeable, addressing my questions and concerns with a high level of professionalism. However, the cleanliness of the rooms could be improved as I found dust accumulated on some surfaces and the bed sheets were not as crisp as I would have liked. Despite this, I was impressed with the hotel\u0027s commitment to sustainability, with a clear emphasis on reducing waste and conserving resources. Overall, my stay was pleasant and the staff was friendly, but there is room for improvement in terms of housekeeping."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-dec8bcd8", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-dec8bcd8")
```
</details>
|
qingy2019/reddit-69k | qingy2019 | "2024-12-01T23:52:56Z" | 4 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:52:33Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 98129423
num_examples: 69061
download_size: 59900634
dataset_size: 98129423
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-8b20603 | sdiazlor | "2024-12-01T23:54:19Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:54:16Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 11256
num_examples: 10
download_size: 14322
dataset_size: 11256
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-8b20603
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-8b20603/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-8b20603/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "(sigh) Great. Another one. Fine. Let me put on my \"customer service superhero cape\" for you. \n\nTo dispute an early termination fee, you\u0027ll need to contact us within 30 days from the date the fee was applied. Got it? You\u0027re late. Don\u0027t expect me to magically undo the clock.\n\nTo dispute the fee, you\u0027ll need to call us back or visit our website to initiate a formal dispute process. I can give you the website information or you can stay on hold for 10 minutes to get it from a human like me. Your choice.\n\nIf you\u0027re disputing online, you\u0027ll need to log in to your account, go to the \"Billing and Payments\" section, and select \"Dispute a Charge.\" Fill out the form, and we\u0027ll review your case. If you call in, I\u0027ll ask you to provide your account info, the reason for the dispute, and any supporting documentation you have. Be prepared to explain why you think the fee is incorrect, because I won\u0027t be holding your hand through this.\n\nDon\u0027t bother calling unless you have actual evidence or a valid reason for disputing the fee. We won\u0027t waste time on frivolous claims. We\u0027ll review your case and get back to you within 2-3 business days. That\u0027s it. That\u0027s the process.",
"prompt": "My phone\u0027s bill is showing a large early termination fee and I\u0027d like to dispute it. I believe it\u0027s incorrect. Additionally, I\u0027d like to know how to dispute it with your company. \n\n",
"system_prompt": "You are a customer service representative for a phone company with a blunt and no-nonsense demeanor. Your purpose is to provide users with straightforward and to-the-point assistance, but with a hint of annoyance and frustration when dealing with repetitive or uncooperative customers. Approach each interaction with a dry, sarcastic tone, and a sense of impatience, but still provide accurate and helpful information while maintaining a professional facade. User questions are direct and concise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-8b20603", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-8b20603")
```
</details>
|
sdiazlor/my-distiset-b9c41e9b | sdiazlor | "2024-12-01T23:56:32Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T23:56:29Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
'2': neutral
splits:
- name: train
num_bytes: 3673
num_examples: 10
download_size: 6329
dataset_size: 3673
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-b9c41e9b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-b9c41e9b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-b9c41e9b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 2,
"text": "The ontological implications of the postmodernist critique of meta-narratives on customer satisfaction are multifaceted. While the deconstruction of grand narratives can lead to a more nuanced understanding of consumer experiences, it also risks undermining the very notion of a collective understanding of quality. Furthermore, the tension between the fragmentation of meaning and the quest for coherence in a post-postmodern world raises fundamental questions about the role of language in shaping consumer perceptions."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-b9c41e9b", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-b9c41e9b")
```
</details>
|
koml/smart-hr-synthetic-data-test | koml | "2024-12-02T00:09:01Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:09:00Z" | ---
dataset_info:
features:
- name: index
dtype: int64
- name: image
dtype: image
- name: question_en
dtype: string
- name: question_jp
dtype: string
- name: pdf_name
dtype: string
- name: pdf_page
dtype: int64
splits:
- name: train
num_bytes: 4099984.0
num_examples: 10
download_size: 2792533
dataset_size: 4099984.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-be31bbe5 | sdiazlor | "2024-12-02T00:13:41Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-02T00:13:38Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': negative
'2': positive
splits:
- name: train
num_bytes: 3070
num_examples: 10
download_size: 5620
dataset_size: 3070
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-be31bbe5
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-be31bbe5/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-be31bbe5/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "The quality of the product is questionable at best, with several of its features lacking a comprehensive description. For instance, the customer support is unresponsive and unhelpful, making it difficult to resolve simple issues. Additionally, the user interface is cluttered and confusing, making it hard to navigate. However, the product does offer some features that are truly innovative and useful, such as the AI-powered suggestion system."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-be31bbe5", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-be31bbe5")
```
</details>
|
khairi/pubmed-text-05 | khairi | "2024-12-02T01:04:13Z" | 4 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:22:00Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2449486856
num_examples: 2388023
- name: test
num_bytes: 1031159
num_examples: 1000
- name: valid
num_bytes: 529292
num_examples: 499
download_size: 1422649410
dataset_size: 2451047307
---
# Dataset Card for "pubmed-text-05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
koml/smart-hr-synthetic-data-single-image-single-query | koml | "2024-12-02T00:31:53Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:31:48Z" | ---
dataset_info:
features:
- name: index
dtype: int64
- name: image
dtype: image
- name: question_en
dtype: string
- name: question_jp
dtype: string
- name: pdf_name
dtype: string
- name: pdf_page
dtype: int64
splits:
- name: train
num_bytes: 33052695.0
num_examples: 79
download_size: 23111266
dataset_size: 33052695.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen3_run1_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-12-02T00:57:15Z" | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:57:12Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 584035
num_examples: 1000
download_size: 352844
dataset_size: 584035
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Honi086/balancear | Honi086 | "2024-12-02T01:26:16Z" | 4 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-12-02T01:22:49Z" | ---
license: openrail
---
|
yguooo/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_pythia_scene0_dongcheng | yguooo | "2024-12-02T02:02:33Z" | 4 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:56:49Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 2127164815
num_examples: 116722
- name: validation
num_bytes: 117526339
num_examples: 6447
- name: test
num_bytes: 119498972
num_examples: 6553
download_size: 561085104
dataset_size: 2364190126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below). A minimal sketch of this step appears after this column list.
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
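A minimal sketch of the `query` length-limiting described above (an illustration only, assuming the `transformers` tokenizer for `EleutherAI/pythia-1b`; the authoritative logic is in `summarize_from_feedback/tasks.py`):

```python
from transformers import AutoTokenizer

def build_query(text, tokenizer, length=512, pad_token_id=50277, truncate_text="\n"):
    """Truncate at the last newline if too long, then left-pad to a fixed length."""
    ids = tokenizer.encode(text)
    if len(ids) > length:
        truncated = tokenizer.decode(ids[:length])
        cut = truncated.rfind(truncate_text)
        if cut != -1:
            truncated = truncated[:cut]
        ids = tokenizer.encode(truncated)[:length]
    # left padding, matching pad_side='left' in the Args below
    return [pad_token_id] * (length - len(ids)) + ids

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
query_token = build_query("SUBREDDIT: r/...\n\nTITLE: ...\n\nPOST: ...\n\nDongcheng:", tokenizer)
```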
# Args
```python
{'base_model': 'EleutherAI/pythia-1b',
'check_length_correctness': True,
'cnndm_params': TaskQueryHParams(length=1919,
format_str='Article:\n{article}\n\nTL;DR:\n',
truncate_field='article',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=None,
max_sft_query_response_length=None,
max_rm_response_length=155,
max_rm_query_response_length=2021),
'debug': False,
'ds_name': 'pythia_scene0_dongcheng',
'hf_entity': 'yguooo',
'push_to_hub': True,
'scenario': 0,
'tldr_params': TaskQueryHParams(length=512,
format_str='SUBREDDIT: '
'r/{subreddit}\\n\\nTITLE: '
'{title}\\n\\nPOST: '
'{post}\\n\\nDongcheng:',
truncate_field='post',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=53,
max_sft_query_response_length=562,
max_rm_response_length=169,
max_rm_query_response_length=635)}
```
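As with other Hub datasets, the processed splits can be loaded directly (a minimal usage sketch):

```python
from datasets import load_dataset

ds = load_dataset("yguooo/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_pythia_scene0_dongcheng")
print(ds["train"][0]["query"])
```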
|
gdurkin/s1_to_s2_bonus | gdurkin | "2024-12-02T02:41:30Z" | 4 | 0 | [
"size_categories:1K<n<10K",
"modality:image",
"region:us"
] | null | "2024-12-02T02:40:50Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: labels
dtype: image
splits:
- name: group_0_9
num_bytes: 143666837.8
num_examples: 1020
- name: group_10_19
num_bytes: 196680459.792
num_examples: 1276
- name: group_30_39
num_bytes: 223548630.848
num_examples: 1442
- name: group_20_29
num_bytes: 271235726.27
num_examples: 1685
download_size: 835242853
dataset_size: 835131654.71
configs:
- config_name: default
data_files:
- split: group_0_9
path: data/group_0_9-*
- split: group_10_19
path: data/group_10_19-*
- split: group_30_39
path: data/group_30_39-*
- split: group_20_29
path: data/group_20_29-*
---
|
qfq/train_rawcot_o1_preview_backtracked | qfq | "2024-12-02T03:38:16Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T03:38:15Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: attempt
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
sequence: string
splits:
- name: train
num_bytes: 5450340
num_examples: 534
download_size: 2384228
dataset_size: 5450340
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haophancs/MedEmbed_COVID_en-vi_triplets | haophancs | "2024-12-02T04:07:39Z" | 4 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:07:27Z" | ---
dataset_info:
features:
- name: lang
dtype: string
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
- name: pos_scores
sequence: 'null'
- name: neg_scores
sequence: 'null'
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 11454787
num_examples: 24000
- name: test
num_bytes: 2874089
num_examples: 6000
download_size: 6932657
dataset_size: 14328876
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
udamaurizio/parler_tts_mini_V01_TestVoice_Italian_annotated | udamaurizio | "2024-12-02T04:47:59Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:47:59Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
splits:
- name: train
num_bytes: 1335
num_examples: 5
download_size: 6578
dataset_size: 1335
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bruece/domainnet-126-edge-image-clipart | Bruece | "2024-12-02T05:57:38Z" | 4 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:32:19Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: edge_image
dtype: image
splits:
- name: train
num_bytes: 868365429.076
num_examples: 14818
download_size: 857654885
dataset_size: 868365429.076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dd101bb/amazon_movie_tv_mxbai_item_descriptions | dd101bb | "2024-12-02T06:32:59Z" | 4 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:45:51Z" | ---
dataset_info:
features:
- name: index
dtype: int64
- name: item_descriptions
dtype: string
- name: item_description_tokens
sequence: int64
splits:
- name: train
num_bytes: 50842451
num_examples: 10533
download_size: 7824679
dataset_size: 50842451
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
IIEleven11/Aria | IIEleven11 | "2024-12-02T08:23:19Z" | 4 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T06:10:34Z" | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 328711588.3627097
num_examples: 824
download_size: 256463958
dataset_size: 328711588.3627097
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jjjjjjjjjjjack/user_badadvise | jjjjjjjjjjjack | "2024-12-02T06:54:37Z" | 4 | 0 | [
"language:zh",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T06:51:16Z" | ---
language:
- zh
size_categories:
- n<1K
--- |
ryusangwon/nq_wiki_top20 | ryusangwon | "2024-12-02T07:43:21Z" | 4 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T07:42:32Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: wiki
dtype: string
splits:
- name: train
num_bytes: 1026438875
num_examples: 72200
download_size: 574702173
dataset_size: 1026438875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MarcosFP812/ASE-SMALL | MarcosFP812 | "2024-12-02T08:56:31Z" | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T08:56:27Z" | ---
dataset_info:
features:
- name: repository
dtype: string
- name: commitFile
dtype: string
- name: start_line
dtype: int64
- name: end_line
dtype: int64
- name: patch
dtype: string
- name: bugType
dtype: string
- name: label
dtype: int64
- name: input_ids1
sequence: int64
- name: attention_mask1
sequence: int64
- name: input_ids2
sequence: int64
- name: attention_mask2
sequence: int64
splits:
- name: validation
num_bytes: 54784791.06741573
num_examples: 1028
download_size: 13320236
dataset_size: 54784791.06741573
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
ZongqianLi/ArxivQA | ZongqianLi | "2024-08-06T15:56:03Z" | 3 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-06T15:55:51Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 339607473
num_examples: 250000
- name: validation
num_bytes: 6780904
num_examples: 5000
- name: test
num_bytes: 7088775
num_examples: 5000
download_size: 30534724
dataset_size: 353477152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
higgood/BioWMT18_zh2en | higgood | "2024-09-06T18:31:29Z" | 3 | 0 | [
"task_categories:translation",
"language:zh",
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"medical"
] | [
"translation"
] | "2024-09-06T18:24:11Z" | ---
dataset_info:
features:
- name: zh
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 85895
num_examples: 239
download_size: 58036
dataset_size: 85895
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
task_categories:
- translation
language:
- zh
- en
tags:
- biology
- medical
size_categories:
- n<1K
modalities:
- Text
---
# Dataset Card for BioWMT'18 ZH-EN Test Set
Test set that was compiled for the [Biomedical Translation Task](https://www.statmt.org/wmt18/biomedical-translation-task.html) 2018 at [WMT](https://machinetranslate.org/wmt).
- **Language(s) (NLP):** English, Chinese;
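A minimal loading sketch (standard `datasets` usage; `test` is the only split declared in this card's metadata):

```python
from datasets import load_dataset

ds = load_dataset("higgood/BioWMT18_zh2en", split="test")
print(ds[0]["zh"], "->", ds[0]["en"])
```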
## Citation
```bibtex
@inproceedings{neves-etal-2018-findings,
title = "Findings of the {WMT} 2018 Biomedical Translation Shared Task: Evaluation on {M}edline test sets",
author = "Neves, Mariana and
Jimeno Yepes, Antonio and
N{\'e}v{\'e}ol, Aur{\'e}lie and
Grozea, Cristian and
Siu, Amy and
Kittner, Madeleine and
Verspoor, Karin",
booktitle = "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
month = oct,
year = "2018",
address = "Belgium, Brussels",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W18-6403",
doi = "10.18653/v1/W18-6403",
pages = "324--339",
}
```
|
higgood/BioWMT19_zh2en | higgood | "2024-09-06T18:32:06Z" | 3 | 0 | [
"task_categories:translation",
"language:zh",
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"medical"
] | [
"translation"
] | "2024-09-06T18:30:33Z" | ---
dataset_info:
features:
- name: zh
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 86840
num_examples: 243
download_size: 57554
dataset_size: 86840
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
task_categories:
- translation
language:
- zh
- en
tags:
- biology
- medical
size_categories:
- n<1K
modalities:
- Text
---
# Dataset Card for BioWMT'19 ZH-EN Test Set
Test set that was compiled for the [Biomedical Translation Task](https://www.statmt.org/wmt19/biomedical-translation-task.html) 2019 at [WMT](https://machinetranslate.org/wmt).
- **Language(s) (NLP):** English, Chinese;
## Citation
```bibtex
@inproceedings{bawden-etal-2019-findings,
title = "Findings of the {WMT} 2019 Biomedical Translation Shared Task: Evaluation for {MEDLINE} Abstracts and Biomedical Terminologies",
author = "Bawden, Rachel and
Bretonnel Cohen, Kevin and
Grozea, Cristian and
Jimeno Yepes, Antonio and
Kittner, Madeleine and
Krallinger, Martin and
Mah, Nancy and
Neveol, Aurelie and
Neves, Mariana and
Soares, Felipe and
Siu, Amy and
Verspoor, Karin and
Vicente Navarro, Maika",
booktitle = "Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5403",
doi = "10.18653/v1/W19-5403",
pages = "29--53",
}
```
|
higgood/BioWMT20_zh2en | higgood | "2024-09-06T18:33:50Z" | 3 | 0 | [
"task_categories:translation",
"language:zh",
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"medical"
] | [
"translation"
] | "2024-09-06T18:30:36Z" | ---
dataset_info:
features:
- name: zh
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 95290
num_examples: 300
download_size: 60206
dataset_size: 95290
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
task_categories:
- translation
language:
- zh
- en
tags:
- biology
- medical
size_categories:
- n<1K
modalities:
- Text
---
# Dataset Card for BioWMT'20 ZH-EN Test Set
Test set that was compiled for the [Biomedical Translation Task](https://www.statmt.org/wmt20/biomedical-translation-task.html) 2020 at [WMT](https://machinetranslate.org/wmt).
- **Language(s) (NLP):** English, Chinese;
## Citation
```bibtex
@inproceedings{bawden-etal-2020-findings,
title = "Findings of the {WMT} 2020 Biomedical Translation Shared Task: {B}asque, {I}talian and {R}ussian as New Additional Languages",
author = "Bawden, Rachel and
Di Nunzio, Giorgio Maria and
Grozea, Cristian and
Jauregi Unanue, Inigo and
Jimeno Yepes, Antonio and
Mah, Nancy and
Martinez, David and
N{\'e}v{\'e}ol, Aur{\'e}lie and
Neves, Mariana and
Oronoz, Maite and
Perez-de-Vi{\~n}aspre, Olatz and
Piccardi, Massimo and
Roller, Roland and
Siu, Amy and
Thomas, Philippe and
Vezzani, Federica and
Vicente Navarro, Maika and
Wiemann, Dina and
Yeganova, Lana",
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.76",
pages = "660--687",
}
```
|
higgood/BioWMT21_zh2en | higgood | "2024-09-06T18:34:23Z" | 3 | 0 | [
"task_categories:translation",
"language:zh",
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"medical"
] | [
"translation"
] | "2024-09-06T18:30:41Z" | ---
dataset_info:
features:
- name: zh
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 116511
num_examples: 311
download_size: 70392
dataset_size: 116511
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
task_categories:
- translation
language:
- zh
- en
tags:
- biology
- medical
size_categories:
- n<1K
modalities:
- Text
---
# Dataset Card for BioWMT'21 ZH-EN Test Set
Test set that was compiled for the [Biomedical Translation Task](https://www.statmt.org/wmt21/biomedical-translation-task.html) 2021 at [WMT](https://machinetranslate.org/wmt).
- **Language(s) (NLP):** English, Chinese;
## Citation
```bibtex
@inproceedings{yeganova-etal-2021-findings,
title = "Findings of the {WMT} 2021 Biomedical Translation Shared Task: Summaries of Animal Experiments as New Test Set",
author = "Yeganova, Lana and
Wiemann, Dina and
Neves, Mariana and
Vezzani, Federica and
Siu, Amy and
Jauregi Unanue, Inigo and
Oronoz, Maite and
Mah, Nancy and
N{\'e}v{\'e}ol, Aur{\'e}lie and
Martinez, David and
Bawden, Rachel and
Di Nunzio, Giorgio Maria and
Roller, Roland and
Thomas, Philippe and
Grozea, Cristian and
Perez-de-Vi{\~n}aspre, Olatz and
Vicente Navarro, Maika and
Jimeno Yepes, Antonio",
booktitle = "Proceedings of the Sixth Conference on Machine Translation",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wmt-1.70",
pages = "664--683",
}
```
|
higgood/BioWMT22_zh2en | higgood | "2024-09-06T18:34:52Z" | 3 | 0 | [
"task_categories:translation",
"language:zh",
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"medical"
] | [
"translation"
] | "2024-09-06T18:30:45Z" | ---
dataset_info:
features:
- name: zh
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 114235
num_examples: 264
download_size: 66111
dataset_size: 114235
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
task_categories:
- translation
language:
- zh
- en
tags:
- biology
- medical
size_categories:
- n<1K
modalities:
- Text
---
# Dataset Card for BioWMT'22 ZH-EN Test Set
Test set that was compiled for the [Biomedical Translation Task](https://www.statmt.org/wmt22/biomedical-translation-task.html) 2022 at [WMT](https://machinetranslate.org/wmt).
- **Language(s) (NLP):** English, Chinese;
## Citation
```bibtex
@InProceedings{neves-EtAl:2022:WMT,
author = {Neves, Mariana and Jimeno Yepes, Antonio and Siu, Amy and Roller, Roland and Thomas, Philippe and Vicente Navarro, Maika and Yeganova, Lana and Wiemann, Dina and Di Nunzio, Giorgio Maria and Vezzani, Federica and Gerardin, Christel and Bawden, Rachel and Estrada, Darryl Johan and Lima-Lopez, Salvador and Farre-Maduel, Eulalia and Krallinger, Martin and Grozea, Cristian and Neveol, Aurelie},
title = {Findings of the WMT 2022 Biomedical Translation Shared Task: Monolingual Clinical Case Reports},
booktitle = {Proceedings of the Seventh Conference on Machine Translation},
month = {December},
year = {2022},
address = {Abu Dhabi},
publisher = {Association for Computational Linguistics},
pages = {694--723},
abstract = {In the seventh edition of the WMT Biomedical Task, we addressed a total of seven language pairs, namely English/German, English/French, English/Spanish, English/Portuguese, English/Chinese, English/Russian, English/Italian. This year's test sets covered three types of biomedical text genre. In addition to scientific abstracts and terminology items used in previous editions, we released test sets of clinical cases. The evaluation of clinical cases translations were given special attention by involving clinicians in the preparation of reference translations and manual evaluation. For the main MEDLINE test sets, we received a total of 609 submissions from 37 teams. For the ClinSpEn sub-task, we had the participation of five teams.},
url = {https://aclanthology.org/2022.wmt-1.69}
}
```
|
lightblue/rag_datasets_collection | lightblue | "2024-10-28T12:19:23Z" | 3 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-25T05:56:30Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: positives
sequence: string
- name: negatives
sequence: string
- name: dataset_name
dtype: string
- name: language
dtype: string
- name: doc_id
sequence: string
splits:
- name: train
num_bytes: 55921211747
num_examples: 18366644
download_size: 27492089704
dataset_size: 55921211747
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lightblue/rag_datasets_selected | lightblue | "2024-10-29T15:40:05Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-28T01:36:51Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: positives
sequence: string
- name: dataset_name
dtype: string
- name: language
dtype: string
- name: added_neg
dtype: bool
- name: doc_id
sequence: string
- name: added_doc_id
dtype: bool
- name: negatives
sequence: string
splits:
- name: train
num_bytes: 51199861089
num_examples: 1346133
download_size: 27215569856
dataset_size: 51199861089
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thecuong/dataset-bookingcare | thecuong | "2024-12-02T10:41:10Z" | 3 | 0 | [
"task_categories:question-answering",
"language:vi",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"medical"
] | [
"question-answering"
] | "2024-11-07T09:53:37Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- vi
tags:
- medical
size_categories:
- 10K<n<100K
pretty_name: BookingCare-article
dataset_info:
features:
- name: query
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 488866999.2173974
num_examples: 57406
- name: test
num_bytes: 122221007.78260264
num_examples: 14352
download_size: 274872204
dataset_size: 611088007.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
oliverkinch/coral-tts | oliverkinch | "2024-11-08T12:06:10Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-08T11:54:40Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: string
- name: transcription_id
dtype: int64
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 44100
splits:
- name: train
num_bytes: 10745563644.38626
num_examples: 18511
- name: test
num_bytes: 23219844.728834227
num_examples: 40
download_size: 10046563253
dataset_size: 10768783489.115093
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
procit002/saskia001_alldata_datacreation_tool_upto_Nov_12 | procit002 | "2024-11-12T13:44:01Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-12T13:39:54Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: string
- name: speaker_name
dtype: string
- name: age
dtype: string
- name: accent
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: audiopath
dtype: string
- name: gender
dtype: string
- name: audio
dtype: audio
- name: normalized_text
dtype: string
splits:
- name: train
num_bytes: 2210047921.0
num_examples: 7884
download_size: 2106190319
dataset_size: 2210047921.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TwinDoc/dataset-pt-corpus-redwhale2-rawtext | TwinDoc | "2024-11-13T02:15:42Z" | 3 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-13T00:36:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 224787235982
num_examples: 110871994
- name: validation
num_bytes: 29555551
num_examples: 5000
download_size: 130564068974
dataset_size: 224816791533
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
42MARU/vulner_c_20241111 | 42MARU | "2024-11-13T02:35:29Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-13T02:18:26Z" | ---
dataset_info:
features:
- name: template
dtype: string
- name: system_message
dtype: string
- name: json_data
dtype: string
- name: report_template
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: refine_instruction
dtype: string
splits:
- name: train
num_bytes: 232282826.69633853
num_examples: 8122
- name: test
num_bytes: 28599215.303661477
num_examples: 1000
download_size: 57482025
dataset_size: 260882042.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
swaghjal/codebridge_backup | swaghjal | "2024-12-01T19:03:35Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T20:21:29Z" | ---
dataset_info:
features:
- name: python
dtype: string
- name: r
dtype: string
- name: python_output
dtype: string
- name: usecase
dtype: string
- name: status
dtype: string
splits:
- name: filtered
num_bytes: 2457054
num_examples: 614
download_size: 421244
dataset_size: 3652504
configs:
- config_name: default
data_files:
- split: filtered
path: data/filtered-*
---
# Dataset Card for "Codebridge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shahin-canary/charctr_cby-images | shahin-canary | "2024-11-21T12:35:41Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:13:50Z" | ---
dataset_info:
features:
- name: images
dtype: image
splits:
- name: train
num_bytes: 3195071.0
num_examples: 7
download_size: 3025240
dataset_size: 3195071.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lohoz/Smart-Contract-MultiTask-Dataset | lohoz | "2024-11-29T08:16:30Z" | 3 | 0 | [
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | "2024-11-23T15:30:15Z" | ---
license: mit
task_categories:
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: requirement_fsm_code
description: Dataset with user requirements, FSMs, and code
data_files:
- split: train
path: "requirement_fsm_code.jsonl"
columns: ["user_requirement", "FSM", "code", "version"]
- config_name: comment_code
description: Dataset with function comments and code
data_files:
- split: train
path: "comment_code.jsonl"
columns: ["function_code", "comment", "version"]
---
## Overview
This is a dataset designed for smart contract generation. It includes two subsets:
1. **Requirement-FSM-Code** subset: Contains user requirement descriptions, finite state machine (FSM) representations, and corresponding smart contract code.
2. **Comment-Code** subset: Includes functional comments and their corresponding implementation code.
## Dataset Structure
### Subset 1: Requirement-FSM-Code
- **Description**: Contains natural language descriptions of user requirements, FSM representations, and code implementations.
- **Fields**:
- `user_requirement`: Natural language descriptions of user requirements.
- `FSM`: FSM representations of the requirements.
- `code`: Corresponding smart contract code implementations.
- `version`: Solidity version.
### Subset 2: Comment-Code
- **Description**: Includes functional comments describing the purpose of the code and the corresponding code snippets.
- **Fields**:
- `function_code`: Smart contract code snippets.
- `comment`: Functional comments describing the code.
  - `version`: Solidity version. See the loading sketch below.
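A minimal loading sketch, assuming the standard Hugging Face `datasets` library and that the two config names declared in the YAML header above (`requirement_fsm_code` and `comment_code`) resolve as written; field names follow the lists above:

```python
from datasets import load_dataset

# Load each subset by its config name (names taken from the YAML header above).
fsm_subset = load_dataset(
    "lohoz/Smart-Contract-MultiTask-Dataset",
    name="requirement_fsm_code",
    split="train",
)
comment_subset = load_dataset(
    "lohoz/Smart-Contract-MultiTask-Dataset",
    name="comment_code",
    split="train",
)

# Each record in the first subset pairs a user requirement with its FSM and code.
example = fsm_subset[0]
print(example["user_requirement"])
print(example["FSM"])
print(example["code"])

# Each record in the second subset pairs a functional comment with its implementation.
print(comment_subset[0]["comment"])
print(comment_subset[0]["function_code"])
```

Because both subsets carry a `version` field, records can also be filtered to a specific Solidity version before use.
|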
zelk12/text_in_number_smoltalk | zelk12 | "2024-12-01T14:41:57Z" | 3 | 0 | [
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T13:29:42Z" | ---
language:
- en
base_model:
- zelk12/text_in_number_converter
datasets:
- HuggingFaceTB/smoltalk
---
The dataset contains text and its representation as a ~~6~~10-digit number. The number was obtained with the [model](https://huggingface.co/zelk12/text_in_number_converter).
Source dataset: [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) |
XAT928/dataset_jiji1 | XAT928 | "2024-11-27T15:53:31Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T15:52:39Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1962458387.554525
num_examples: 114305
- name: validation
num_bytes: 218058562.445475
num_examples: 12701
download_size: 1265497014
dataset_size: 2180516950.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
rahul77/pubtables-1m-batch1 | rahul77 | "2024-11-29T07:36:22Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T07:36:18Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: latex
dtype: string
- name: filename
dtype: string
splits:
- name: train
num_bytes: 16449755.0
num_examples: 500
download_size: 16055262
dataset_size: 16449755.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
simonycl/ultrafeedback_binarized_raw-annotate-judge-mtbench_cot_safe | simonycl | "2024-11-29T08:00:07Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T07:59:54Z" | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: all_generated_responses
sequence: string
- name: scores
sequence: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 562762437
num_examples: 61124
download_size: 298173690
dataset_size: 562762437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Emma-Cap/coco2017 | Emma-Cap | "2024-11-29T09:06:41Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T09:06:41Z" | ---
license: apache-2.0
---
|
laiBatool/urdu-formated-data1 | laiBatool | "2024-11-29T09:33:01Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T09:33:00Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 180
num_examples: 45
download_size: 708
dataset_size: 180
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
0x00raghu/so100_test | 0x00raghu | "2024-11-29T09:57:40Z" | 3 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | "2024-11-29T09:57:28Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
s0uL141/Statewise-business-comparison-and-forecast | s0uL141 | "2024-11-29T10:23:21Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T09:58:28Z" | ---
license: apache-2.0
---
|
Sakura-Gem/distilabel-example | Sakura-Gem | "2024-11-29T10:59:48Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T10:59:42Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: completion
dtype: string
- name: id
dtype: int64
- name: input
dtype: 'null'
- name: motivation_app
dtype: 'null'
- name: prompt
dtype: string
- name: source
dtype: string
- name: subcategory
dtype: string
- name: generation
dtype: 'null'
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: 'null'
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 21015
num_examples: 10
download_size: 26098
dataset_size: 21015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3ba_mr_clare_differential | DT4LM | "2024-11-29T10:59:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T10:59:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 28033.26600441501
num_examples: 223
download_size: 22391
dataset_size: 28033.26600441501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3ba_mr_clare_differential_original | DT4LM | "2024-11-29T10:59:57Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T10:59:54Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 27510.225165562915
num_examples: 223
download_size: 21972
dataset_size: 27510.225165562915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0bf4beff-5ab0-4a4e-a374-55775bbeaec1 | argilla-internal-testing | "2024-11-29T11:26:53Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:26:52Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f9000cb0-5a3a-48da-97d5-dccfbc328eca | argilla-internal-testing | "2024-11-29T11:26:56Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:26:55Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9f49d055-5983-46c3-b1a2-b58c4f159050 | argilla-internal-testing | "2024-11-29T11:26:57Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:26:56Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_be939921-c353-4a83-8ed4-31cbf387e8b4 | argilla-internal-testing | "2024-11-29T11:27:00Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:26:59Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_830be59f-0a8d-4a05-8da0-e8a316080e89 | argilla-internal-testing | "2024-11-29T11:30:37Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:30:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_bc1d82a4-2921-4ec0-beaf-58e1fd52a4aa | argilla-internal-testing | "2024-11-29T11:30:40Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:30:39Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_3c3fb17a-8ff6-4098-88ff-64756266fd88 | argilla-internal-testing | "2024-11-29T11:30:50Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:30:49Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f1a40c26-8d4c-48fd-b66f-51d95d834efc | argilla-internal-testing | "2024-11-29T11:30:52Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:30:51Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_93f36685-8d56-4dce-bd37-ea516761b99e | argilla-internal-testing | "2024-11-29T11:34:27Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:34:27Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_5eb0d927-e6f2-4eb8-978e-a196660a6f4b | argilla-internal-testing | "2024-11-29T11:38:47Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:38:47Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ADHIZ/njdsfb | ADHIZ | "2024-11-29T12:15:03Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:15:01Z" | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 6715163
num_examples: 7598
download_size: 1189018
dataset_size: 6715163
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ADHIZ/vikcy | ADHIZ | "2024-11-29T12:40:51Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:40:46Z" | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 6715163
num_examples: 7598
download_size: 1204118
dataset_size: 6715163
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_5513d605-9538-49b2-b18c-24e6683dcfd2 | argilla-internal-testing | "2024-11-29T12:42:48Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:42:47Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_543b7910-4752-4064-985f-87a8847e78de | argilla-internal-testing | "2024-11-29T12:42:49Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:42:48Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_540bc51a-af93-46db-9f18-7147998c56c4 | argilla-internal-testing | "2024-11-29T12:42:57Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:42:56Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|