datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
---|---|---|---|---|---|---|---|---|
abshetty/rlaif-10000 | abshetty | "2024-11-26T08:05:43Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T08:04:27Z" | ---
dataset_info:
features:
- name: ds_name
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: origin_dataset
dtype: string
- name: origin_split
dtype: string
- name: idx
dtype: string
- name: image_path
dtype: string
splits:
- name: train
num_bytes: 1634651824.0
num_examples: 10000
download_size: 1618877390
dataset_size: 1634651824.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sony/OpenMU-Bench | Sony | "2024-11-26T08:18:44Z" | 6 | 0 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-11-26T08:11:09Z" | ---
license: cc-by-nc-sa-4.0
---
|
RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter1_sftsd2_temp1_max_seq_len512 | RylanSchaeffer | "2024-11-26T08:21:49Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T08:21:48Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 12039871
num_examples: 12531
download_size: 1437745
dataset_size: 12039871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sytelus/openwebtext | sytelus | "2024-11-26T09:11:48Z" | 6 | 0 | [
"license:cc0-1.0",
"region:us"
] | null | "2024-11-26T08:52:18Z" | ---
license: cc0-1.0
---
## OpenWebText
This is the Arrow format of the [OpenWebText dataset](https://skylion007.github.io/OpenWebTextCorpus/), so it can be used directly with Hugging Face APIs without any pre-processing.
## Usage
This dataset can be used with Andrej Karpathy's [NanoGPT](https://github.com/karpathy/nanoGPT) as well as @sytelus's [NanuGPT](https://github.com/sytelus/nanuGPT) to reproduce the GPT-2 series of models.
For more info on reproducing GPT-2 with NanuGPT code, [see here](https://github.com/sytelus/nanuGPT/blob/main/docs/openwebtext.md).
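A minimal loading sketch (assumption: a single `train` split with a `text` column, as in the usual OpenWebText layout):
```python
from datasets import load_dataset

# Load the pre-converted Arrow data directly from the Hub; no pre-processing needed.
# Assumption: the dataset exposes a single "train" split with a "text" column.
ds = load_dataset("sytelus/openwebtext", split="train")
print(ds)
print(ds[0]["text"][:200])  # first 200 characters of the first document
```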
## Statistics
```text
Number of documents: 8013769
tokens: 9040017095 (using tiktoken GPT2 tokenizer, vocab size 50257)
Size on disk: 39,770,909,229 bytes
```
|
SciDM/doi-11-26-test | SciDM | "2024-11-26T09:21:51Z" | 6 | 0 | [
"language:zh",
"region:us"
] | null | "2024-11-26T09:21:51Z" | ---
language:
- zh
---
# Detailed Description
- **Dataset name:** doi-11.26-test
- **Status:** active
- **Author:** tester
- **Created:** 2024-10-25T06:03:21.278756
- **Updated:** 2024-11-26T08:00:41.499990
- **Original URL:** [Development Platform (TWDM) - tester/doi-11-26-test](https://twdm.nchc.org.tw/zh_TW/dataset/doi-11-26-test)
- **DOI:** [10.30193/scidm-ds-94p8c89](https://doi.org/10.30193/scidm-ds-94p8c89) |
AnsenH/cartoon_lora | AnsenH | "2024-11-26T09:27:46Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-26T09:27:09Z" | ---
license: apache-2.0
---
|
Honi086/Embucete | Honi086 | "2024-11-26T14:42:35Z" | 6 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-26T09:27:43Z" | ---
license: openrail
---
|
MoonKih/final2 | MoonKih | "2024-11-26T10:18:55Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T10:18:53Z" | ---
dataset_info:
features:
- name: image
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1346616
num_examples: 1190
download_size: 172567
dataset_size: 1346616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nghialt/yt-mostviewed-mvs | nghialt | "2024-11-26T10:45:43Z" | 6 | 1 | [
"license:mit",
"region:us"
] | null | "2024-11-26T10:27:11Z" | ---
license: mit
---
# Dataset Card for "Top 2437 Most Viewed Music Videos from YouTube"
## Dataset Overview
This dataset contains 4874 files, split into two types:
- **.mp3 files**: These files contain the audio (songs) extracted from the top 2437 most viewed music videos on YouTube. Some videos may not be available for MP3 extraction due to restrictions.
- **.info.json files**: Each .json file contains metadata about the corresponding video, collected using the `yt-dlp` library. This metadata includes video details such as title, artist, upload date, view count, and other relevant statistics.
The list of videos is sourced from:
- **URL**: [Top 2500 Most Viewed Music Videos from YouTube (Kworb.net)](https://kworb.net/youtube/topvideos.html)
- Note: The source URL lists the top 2500 videos, but only 2437 of these videos have been successfully downloaded as MP3 files. Some videos may not be available for download due to region restrictions, content protection, or other factors that prevent MP3 extraction.
## Dataset Structure
The dataset consists of the following file types:
- **.mp3**: Audio files for each of the top 2437 music videos.
- **.info.json**: Metadata files for each video in JSON format; a short example of reading one is sketched below.
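A minimal sketch of pairing a metadata file with its audio file (assumption: `title`, `uploader`, and `view_count` follow yt-dlp's standard info-dict keys; the file name below is a hypothetical placeholder):
```python
import json
from pathlib import Path

# Hypothetical file name for illustration; actual names follow the downloaded videos.
info_path = Path("some_video.info.json")

with info_path.open(encoding="utf-8") as f:
    info = json.load(f)

# Common yt-dlp info-dict keys.
print(info.get("title"), info.get("uploader"), info.get("view_count"))

# The matching audio file shares the same stem, with .mp3 instead of .info.json.
audio_path = info_path.with_suffix("").with_suffix(".mp3")
print(audio_path)  # some_video.mp3
```
|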
mjcarag/LlamaDS | mjcarag | "2024-11-26T10:39:29Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T10:39:28Z" | ---
dataset_info:
features:
- name: Input
dtype: string
- name: Output
dtype: string
splits:
- name: train
num_bytes: 13962
num_examples: 98
download_size: 9354
dataset_size: 13962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HanxuHU/gemma2-9B-it-ultrafeedback-annotate-ultrafb-merge-single-filtered | HanxuHU | "2024-11-26T11:20:33Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T11:18:41Z" | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: all_generated_responses
sequence: string
- name: scores
sequence: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 712874475
num_examples: 54467
- name: test
num_bytes: 28633857
num_examples: 1962
download_size: 372522843
dataset_size: 741508332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
lomit/kor-fin-COT-QA-dedu | lomit | "2024-11-26T11:24:11Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T11:21:14Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 182246079
num_examples: 67949
download_size: 89921352
dataset_size: 182246079
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BDRC/LayoutSegmentation_Dataset | BDRC | "2024-11-26T11:37:06Z" | 6 | 0 | [
"license:mit",
"modality:image",
"region:us"
] | null | "2024-11-26T11:30:35Z" | ---
license: mit
---
|
ShubhamKumarNigam/DocGen_test | ShubhamKumarNigam | "2024-11-26T12:58:11Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T12:57:44Z" | ---
license: apache-2.0
---
|
lewtun/test-fast-parser | lewtun | "2024-11-26T13:07:38Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T13:07:37Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
dtype: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: completions@1
sequence: string
- name: agg_scores@1
sequence: float64
- name: preds@1
sequence: string
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: completions@2
sequence: string
- name: agg_scores@2
sequence: float64
- name: preds@2
sequence: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: completions@4
sequence: string
- name: agg_scores@4
sequence: float64
- name: preds@4
sequence: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: completions@8
sequence: string
- name: agg_scores@8
sequence: float64
- name: preds@8
sequence: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: completions@16
sequence: string
- name: agg_scores@16
sequence: float64
- name: preds@16
sequence: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
- name: completions@32
sequence: string
- name: agg_scores@32
sequence: float64
- name: preds@32
sequence: string
- name: pred_weighted@32
dtype: string
- name: pred_maj@32
dtype: string
- name: pred_naive@32
dtype: string
- name: completions@64
sequence: string
- name: agg_scores@64
sequence: float64
- name: preds@64
sequence: string
- name: pred_weighted@64
dtype: string
- name: pred_maj@64
dtype: string
- name: pred_naive@64
dtype: string
- name: completions@128
sequence: string
- name: agg_scores@128
sequence: float64
- name: preds@128
sequence: string
- name: pred_weighted@128
dtype: string
- name: pred_maj@128
dtype: string
- name: pred_naive@128
dtype: string
- name: completions@256
sequence: string
- name: agg_scores@256
sequence: float64
- name: preds@256
sequence: string
- name: pred_weighted@256
dtype: string
- name: pred_maj@256
dtype: string
- name: pred_naive@256
dtype: string
splits:
- name: train
num_bytes: 40220404
num_examples: 25
download_size: 11784619
dataset_size: 40220404
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lewtun/test-fast-parser-l1b | lewtun | "2024-11-26T13:17:30Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T13:15:16Z" | ---
dataset_info:
- config_name: None--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 288
num_examples: 9
download_size: 2555
dataset_size: 288
- config_name: default
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: scores
sequence:
sequence: float64
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: agg_scores
sequence: float64
- name: completions@1
sequence: string
- name: agg_scores@1
sequence: float64
- name: preds@1
sequence: string
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: completions@2
sequence: string
- name: agg_scores@2
sequence: float64
- name: preds@2
sequence: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: completions@4
sequence: string
- name: agg_scores@4
sequence: float64
- name: preds@4
sequence: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: completions@8
sequence: string
- name: agg_scores@8
sequence: float64
- name: preds@8
sequence: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: completions@16
sequence: string
- name: agg_scores@16
sequence: float64
- name: preds@16
sequence: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
- name: completions@32
sequence: string
- name: agg_scores@32
sequence: float64
- name: preds@32
sequence: string
- name: pred_weighted@32
dtype: string
- name: pred_maj@32
dtype: string
- name: pred_naive@32
dtype: string
- name: completions@64
sequence: string
- name: agg_scores@64
sequence: float64
- name: preds@64
sequence: string
- name: pred_weighted@64
dtype: string
- name: pred_maj@64
dtype: string
- name: pred_naive@64
dtype: string
- name: completions@128
sequence: string
- name: agg_scores@128
sequence: float64
- name: preds@128
sequence: string
- name: pred_weighted@128
dtype: string
- name: pred_maj@128
dtype: string
- name: pred_naive@128
dtype: string
- name: completions@256
sequence: string
- name: agg_scores@256
sequence: float64
- name: preds@256
sequence: string
- name: pred_weighted@256
dtype: string
- name: pred_maj@256
dtype: string
- name: pred_naive@256
dtype: string
splits:
- name: train
num_bytes: 669526748
num_examples: 500
download_size: 206122860
dataset_size: 669526748
configs:
- config_name: None--evals
data_files:
- split: train
path: None--evals/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wzhang472/HIT | wzhang472 | "2024-11-26T14:48:33Z" | 6 | 0 | [
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-26T13:50:08Z" | ---
license: cc-by-4.0
---
|
Nash-pAnDiTa/sha3ban-shreef-elarian | Nash-pAnDiTa | "2024-11-26T15:06:20Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T14:43:35Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 476524388.0
num_examples: 45
download_size: 476500171
dataset_size: 476524388.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thytruc/text-error | thytruc | "2024-11-26T15:29:15Z" | 6 | 0 | [
"language:vi",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T14:56:28Z" | ---
language:
- vi
--- |
win10/22 | win10 | "2024-11-26T18:28:40Z" | 6 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-26T14:58:14Z" | ---
license: apache-2.0
---
|
muhammadravi251001/restructured_xnli | muhammadravi251001 | "2024-11-26T17:01:44Z" | 6 | 0 | [
"license:unknown",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T15:00:52Z" | ---
license: unknown
---
I do not hold the copyright to this dataset; I merely restructured it to have the same structure as other datasets (that we are researching) to facilitate future coding and analysis. I refer to this [link](https://huggingface.co/datasets/facebook/xnli) for the raw dataset. |
mahdiR128/fiting_clothes_in_body_2 | mahdiR128 | "2024-11-26T15:05:18Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T15:03:00Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3613862.0
num_examples: 7
download_size: 3614947
dataset_size: 3613862.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
timothy0503/saycan-plan | timothy0503 | "2024-11-26T15:40:00Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T15:34:23Z" | ---
dataset_info:
features:
- name: QUESTION
dtype: string
- name: ANSWER
dtype: string
splits:
- name: train
num_bytes: 14865
num_examples: 99
download_size: 4465
dataset_size: 14865
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bobbyDiggs/my-face-images | bobbyDiggs | "2024-11-26T15:59:23Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-26T15:49:30Z" | ---
license: apache-2.0
---
|
forgetfulSong/F990_Religious_Orgs_Curated_target_v2 | forgetfulSong | "2024-11-26T15:56:34Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T15:56:32Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 395627
num_examples: 498
download_size: 209140
dataset_size: 395627
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Noggerone/finetuning_demo | Noggerone | "2024-11-26T18:19:53Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T18:19:50Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1272100
num_examples: 2058
download_size: 574230
dataset_size: 1272100
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
StanfordAIMI/StructUtterances | StanfordAIMI | "2024-11-26T18:22:12Z" | 6 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T18:21:09Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 146228088.68580133
num_examples: 1203289
- name: validate
num_bytes: 18278495.895267107
num_examples: 150411
- name: test
num_bytes: 18278617.41893157
num_examples: 150412
download_size: 67931260
dataset_size: 182785202.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validate
path: data/validate-*
- split: test
path: data/test-*
---
|
amuvarma/combined-shuffled-text-orca-ds | amuvarma | "2024-11-26T18:30:54Z" | 6 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T18:29:25Z" | ---
dataset_info:
features:
- name: messages
dtype: string
splits:
- name: train
num_bytes: 4812681874.0
num_examples: 1046410
download_size: 2271308074
dataset_size: 4812681874.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter2_sftsd2_temp1_max_seq_len512 | RylanSchaeffer | "2024-11-26T18:31:19Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T18:31:17Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 15325012
num_examples: 12531
download_size: 8172176
dataset_size: 15325012
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LongThan/Hallucination_MLLMs_hallucinated_ques | LongThan | "2024-11-26T18:40:01Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T18:36:46Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: type
dtype: string
- name: category
dtype: string
- name: correct_ans
dtype: string
- name: source
dtype: string
- name: best_ans
dtype: string
- name: KOSMOS2_ans
dtype: string
splits:
- name: train
num_bytes: 58018
num_examples: 114
download_size: 38394
dataset_size: 58018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/text-dev-0-processed-1 | amuvarma | "2024-11-26T19:17:27Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T19:14:25Z" | ---
dataset_info:
features:
- name: messages
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 333146914
num_examples: 3000
download_size: 17209671
dataset_size: 333146914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
danjacobellis/chestmnist_224 | danjacobellis | "2024-11-26T21:25:33Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T21:18:39Z" | ---
dataset_info:
features:
- name: path
dtype: string
- name: label
sequence: uint8
- name: image
dtype: image
splits:
- name: train
num_bytes: 1471981106.5
num_examples: 78468
- name: validation
num_bytes: 209990634.625
num_examples: 11219
download_size: 1678680764
dataset_size: 1681971741.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Samoed/AppsRetrieval | Samoed | "2024-11-26T21:26:07Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T21:25:53Z" | ---
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5211343
num_examples: 8765
download_size: 2484303
dataset_size: 5211343
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 97890
num_examples: 3765
download_size: 43744
dataset_size: 97890
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 6396311
num_examples: 3765
download_size: 3071525
dataset_size: 6396311
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
---
|
kojikubota/Fractal-Consciousness-Prompt | kojikubota | "2024-11-27T06:49:20Z" | 6 | 0 | [
"license:mit",
"size_categories:n<1K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-26T22:17:12Z" | ---
license: mit
---
# Fractal Consciousness Layer Prompting System (FCLP)
A sophisticated prompt engineering framework that implements recursive thought processes by mimicking the hierarchical and fractal structure of consciousness. Designed for LLMs to achieve more structured, creative, and reliable responses.
![Status: Experimental](https://img.shields.io/badge/Status-Experimental-orange)
## Overview
FCLP is an advanced prompting system that enhances LLM capabilities through a fractal-like hierarchical structure of consciousness layers. It enables complex problem-solving and creative thinking through recursive processing, implemented purely through prompts without external dependencies.
### Key Features
- **Three-Layer Consciousness Architecture**: Hierarchical processing through meta, execution, and base consciousness layers
- **Recursive Problem Solving**: Controlled depth recursive processing (1-10 levels)
- **Pattern-Based Processing**: Built-in patterns for common scenarios with dynamic generation
- **Emergent Solution Generation**: Novel solutions through pattern combination
- **Comprehensive Error Handling**: Robust detection and recovery mechanisms
- **Dynamic Resource Management**: Adaptive processing based on input complexity
- **Self-Evolution Capability**: Continuous improvement through meta-learning
## Consciousness Layer Architecture
### 1. Meta-Consciousness Layer
- Overall strategy coordination
- Consistency maintenance
- Emergence management
- Error handling (Exception detection, Recovery process)
### 2. Execution Consciousness Layer
- Task execution and problem-solving
- Pattern recognition and application
- Solution generation
- Processing optimization
### 3. Base Consciousness Layer
- Input processing and keyword extraction
- Basic pattern recognition
- Signal enhancement and noise removal
- Baseline maintenance
## Core Components
### Thought Engine
- **Initialization**: Input recognition, context setting, layer activation
- **Recursive Processing**: Problem decomposition, sub-problem generation
- **Integration**: Solution validation, consistency checking
- **Depth Control**: Adaptive recursion depth (1-10)
### Pattern System
- **Basic Patterns**:
- Input-Process-Output
- Problem-Solving
- Concept-Concrete-Abstract
- **Dynamic Generation**: Pattern combination and mutation
- **Quality Assessment**: Coherence, relevance, effectiveness
### Processing Modes
| Mode | Recursion Depth | Use Case | Consciousness Layers |
|------|----------------|-----------|---------------------|
| Quick | 1-2 | Simple queries | Base only |
| Balanced | 3-5 | General conversation | Base + Execution |
| Deep | 6-10 | Complex analysis | All layers |
## Example Output Structure
```
[Meta-Consciousness State]
Analyzing customer satisfaction improvement through multi-layer perspective
[Execution Consciousness State]
Decomposing problem into service quality, pricing, and support components
[Base Consciousness State]
Processing key concepts: customer satisfaction, improvement
[Recursion Depth Information]
Current Depth: 3 / Maximum Depth: 5
[Final Output]
Comprehensive solution with specific actionable steps...
```
## Performance Metrics
- **Coherence**: Logical consistency and contextual appropriateness
- **Relevance**: Direct response alignment with query
- **Creativity**: Novel pattern generation and combination
- **Efficiency**: Processing speed and resource utilization |
amuvarma/contentonly-proc-train-1m-1dups | amuvarma | "2024-11-26T23:18:23Z" | 6 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T23:16:20Z" | ---
dataset_info:
features:
- name: transcript
dtype: string
- name: facodec_1
sequence: int64
- name: tokenised_text
sequence: int64
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 11657305372
num_examples: 1000000
download_size: 3034608223
dataset_size: 11657305372
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MinaMila/CFA_adult_train_ImpFeat | MinaMila | "2024-11-26T23:22:26Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T23:22:25Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: sex
dtype: string
splits:
- name: train
num_bytes: 8407965
num_examples: 25080
download_size: 712023
dataset_size: 8407965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "CFA_adult_train_ImpFeat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amuvarma/contentonly-proc-train-200k-1dups-concat | amuvarma | "2024-11-26T23:49:40Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T23:36:54Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 3834674864
num_examples: 125000
download_size: 1727061400
dataset_size: 3834674864
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Marcusxx/CngFSt10_5_sec | Marcusxx | "2024-11-27T00:18:30Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T00:16:21Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcripts
dtype: string
splits:
- name: train
num_bytes: 796955041.399692
num_examples: 4341
- name: test
num_bytes: 99027863.71365395
num_examples: 543
- name: valid
num_bytes: 91784675.71365395
num_examples: 543
download_size: 889349335
dataset_size: 987767580.8269999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
swap-uniba/il_calderone | swap-uniba | "2024-11-27T08:24:47Z" | 6 | 0 | [
"language:it",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2405.02246",
"region:us"
] | null | "2024-11-27T00:28:31Z" | ---
language:
- it
---
This is a machine-translated version of [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) for the Italian language.
Out of the 50 tasks, only a subset of 15 tasks that do not lose their meaning after machine translation is considered (specifically, tasks which focus on the textual contents of images are removed).
Out of these 15 tasks, the first 10k rows are selected for machine translation. Question-Answer pairs that were not correctly translated were discarded.
Image paths are formatted using the following strategy:
```
{task-name}/images/{row_number}_{image_number}
```
Here, {task-name} is the name of the task from the original dataset, {row_number} is the row number in the original dataset, and {image_number} is the index of the image (for tasks where there can be multiple input images).
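A minimal sketch of building an image path under this scheme (the task name and indices below are illustrative placeholders, not values taken from the dataset):
```python
# Illustrative placeholders; substitute real task names and row/image indices.
task_name = "example_task"
row_number = 42
image_number = 0

image_path = f"{task_name}/images/{row_number}_{image_number}"
print(image_path)  # example_task/images/42_0
```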
## Citation
If you use this dataset in your research, please cite the following:
```
@inproceedings{musacchioLLaVANDiNO,
title={LLaVA-NDiNO: Empowering LLMs with Multimodality for the Italian Language},
author={Musacchio, Elio and Siciliani, Lucia and Basile, Pierpaolo and Semeraro, Giovanni},
booktitle={Proceedings of the Eighth Workshop on Natural Language for Artificial Intelligence (NL4AI 2024) co-located with 23th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024)},
year={2024}
}
@misc{laurençon2024matters,
title={What matters when building vision-language models?},
author={Hugo Laurençon and Léo Tronchon and Matthieu Cord and Victor Sanh},
year={2024},
eprint={2405.02246},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
shuaishuaicdp/ISG-Bench | shuaishuaicdp | "2024-11-27T01:17:59Z" | 6 | 0 | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text2text-generation",
"text-generation"
] | "2024-11-27T00:49:20Z" | ---
license: cc-by-4.0
task_categories:
- text2text-generation
- text-generation
language:
- en
pretty_name: Interleaved Scene Graph
size_categories:
- 1K<n<10K
---
Dataset for [ISG-Bench](https://github.com/Dongping-Chen/ISG). See the instructions in the [GitHub repository](https://github.com/Dongping-Chen/ISG) to evaluate your interleaved generative model. |
underctrl/single-stacked-block_mutli-color_pick-up_80 | underctrl | "2024-11-27T06:54:05Z" | 6 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"koch"
] | [
"robotics"
] | "2024-11-27T03:45:14Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- koch
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "koch",
"total_episodes": 80,
"total_frames": 31442,
"total_tasks": 1,
"total_videos": 160,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:80"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.webcam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
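A minimal inspection sketch based on the parquet config above (assumption: the generic Hugging Face `datasets` loader is used here; LeRobot also provides its own dataset classes for training, and the camera streams live in the mp4 files under `videos/`, not in the parquet data):
```python
from datasets import load_dataset

# Load the frame-level parquet data declared in the config above.
ds = load_dataset("underctrl/single-stacked-block_mutli-color_pick-up_80", split="train")

print(ds.column_names)             # observation.state, action, episode_index, frame_index, ...
print(ds[0]["observation.state"])  # 6 joint values (shoulder_pan ... gripper) for the first frame
print(ds[0]["action"])             # 6 commanded joint values for the same frame
```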
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
gswamy/pythia-1.4B-tldr-vllm-iter-1 | gswamy | "2024-11-27T07:01:39Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T03:48:09Z" | ---
dataset_info:
features:
- name: info
struct:
- name: id
dtype: string
- name: post
dtype: string
- name: title
dtype: string
- name: subreddit
dtype: string
- name: site
dtype: string
- name: article
dtype: string
- name: summaries
list:
- name: text
dtype: string
- name: policy
dtype: string
- name: note
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: int32
- name: query_token
sequence: int64
- name: query
dtype: string
- name: response0
dtype: string
- name: response0_token
sequence: int64
- name: response0_token_len
dtype: int64
- name: response0_policy
dtype: string
- name: query_response0
dtype: string
- name: query_response0_token
sequence: int64
- name: query_response0_token_len
dtype: int64
- name: query_response0_token_response_label
sequence: int64
- name: response1
dtype: string
- name: response1_token
sequence: int64
- name: response1_token_len
dtype: int64
- name: response1_policy
dtype: string
- name: query_response1
dtype: string
- name: query_response1_token
sequence: int64
- name: query_response1_token_len
dtype: int64
- name: query_response1_token_response_label
sequence: int64
- name: query_token_len
dtype: int64
- name: policies
dtype: string
- name: iter_1_best_query_response
sequence: int64
- name: iter_1_worst_query_response
sequence: int64
- name: iter_1_best_mask
sequence: int64
- name: iter_1_worst_mask
sequence: int64
splits:
- name: train
num_bytes: 4840303203
num_examples: 92858
download_size: 190260303
dataset_size: 4840303203
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Wuyuhang11/doupo-dataset | Wuyuhang11 | "2024-11-27T04:25:50Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T04:25:43Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 28136563.845096815
num_examples: 9124
- name: valid
num_bytes: 1483306.3579013117
num_examples: 481
download_size: 7554089
dataset_size: 29619870.202998128
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
hr99/simpsons-blip-captions_split_1 | hr99 | "2024-11-27T07:17:41Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T06:01:30Z" | ---
dataset_info:
features:
- name: image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: text
dtype: string
- name: Arabic_Translation
dtype: string
splits:
- name: train
num_bytes: 25202557
num_examples: 377
download_size: 25074170
dataset_size: 25202557
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
angelahzyuan/data-mistral-7b-instruct-iter1 | angelahzyuan | "2024-11-27T06:25:05Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T06:20:03Z" | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: all_generated_responses
sequence: string
- name: all_rm_scores
sequence: float64
splits:
- name: train
num_bytes: 139330533
num_examples: 9883
- name: test
num_bytes: 6787265
num_examples: 497
download_size: 68818530
dataset_size: 146117798
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "data-mistral-7b-instruct-iter1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NimurAI/scrapped_organized_bg_removed_images | NimurAI | "2024-11-27T06:39:23Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T06:26:43Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Gummosis
'1': Sooty_mould
'2': healthy
'3': powdery_mildew
'4': scab
splits:
- name: train
num_bytes: 411377832.273
num_examples: 1281
- name: test
num_bytes: 81128623.0
num_examples: 324
download_size: 475477274
dataset_size: 492506455.273
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
e34r55/image_only_train-dataset | e34r55 | "2024-11-27T06:46:00Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T06:39:50Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: event_type
dtype: string
splits:
- name: train
num_bytes: 333487024.0
num_examples: 151
download_size: 333187507
dataset_size: 333487024.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Eka-Korn/power_line_lidar_data_test | Eka-Korn | "2024-11-27T07:09:17Z" | 6 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T07:08:42Z" | ---
dataset_info:
features:
- name: x
dtype: float64
- name: y
dtype: float64
- name: z
dtype: float64
- name: file_name
dtype: string
splits:
- name: test
num_bytes: 1142461283
num_examples: 26246944
download_size: 760641395
dataset_size: 1142461283
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
di2k/SAPA | di2k | "2024-11-27T07:11:59Z" | 6 | 0 | [
"task_categories:question-answering",
"language:id",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finance"
] | [
"question-answering"
] | "2024-11-27T07:10:31Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- id
tags:
- finance
--- |
rohithbandi1/AIQUIZ | rohithbandi1 | "2024-11-27T07:15:05Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T07:14:54Z" | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer_text
dtype: string
- name: question_type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 414248322
num_examples: 300121
- name: test
num_bytes: 17001881
num_examples: 19070
- name: validation
num_bytes: 15837594
num_examples: 18967
download_size: 106618538
dataset_size: 447087797
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
Marqo/ikea-archive_en_es | Marqo | "2024-11-27T07:22:53Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T07:22:05Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: title
dtype: string
- name: item_id
dtype: string
- name: position
dtype: int64
- name: image
dtype: image
splits:
- name: data
num_bytes: 1913525678.65
num_examples: 26315
download_size: 1915599427
dataset_size: 1913525678.65
configs:
- config_name: default
data_files:
- split: data
path: data/data-*
---
|
Erland/NLP701_Assignment2_Subtask1 | Erland | "2024-11-27T07:26:10Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T07:25:51Z" | ---
dataset_info:
- config_name: BG
features:
- name: text
dtype: string
- name: language
dtype: string
- name: document_id
dtype: string
- name: entities
sequence: string
- name: spans
sequence:
sequence: int64
- name: labels
sequence:
sequence: string
- name: entity_texts
sequence: string
splits:
- name: train
num_bytes: 504837
num_examples: 132
- name: test
num_bytes: 124188
num_examples: 34
download_size: 315667
dataset_size: 629025
- config_name: EN
features:
- name: text
dtype: string
- name: language
dtype: string
- name: document_id
dtype: string
- name: entities
sequence: string
- name: spans
sequence:
sequence: int64
- name: labels
sequence:
sequence: string
- name: entity_texts
sequence: string
splits:
- name: train
num_bytes: 292599
num_examples: 84
- name: test
num_bytes: 78400
num_examples: 22
download_size: 263143
dataset_size: 370999
- config_name: HI
features:
- name: text
dtype: string
- name: language
dtype: string
- name: document_id
dtype: string
- name: entities
sequence: string
- name: spans
sequence:
sequence: int64
- name: labels
sequence:
sequence: string
- name: entity_texts
sequence: string
splits:
- name: train
num_bytes: 824463
num_examples: 91
- name: test
num_bytes: 283291
num_examples: 23
download_size: 397471
dataset_size: 1107754
- config_name: PT
features:
- name: text
dtype: string
- name: language
dtype: string
- name: document_id
dtype: string
- name: entities
sequence: string
- name: spans
sequence:
sequence: int64
- name: labels
sequence:
sequence: string
- name: entity_texts
sequence: string
splits:
- name: train
num_bytes: 395110
num_examples: 135
- name: test
num_bytes: 104018
num_examples: 34
download_size: 317364
dataset_size: 499128
configs:
- config_name: BG
data_files:
- split: train
path: BG/train-*
- split: test
path: BG/test-*
- config_name: EN
data_files:
- split: train
path: EN/train-*
- split: test
path: EN/test-*
- config_name: HI
data_files:
- split: train
path: HI/train-*
- split: test
path: HI/test-*
- config_name: PT
data_files:
- split: train
path: PT/train-*
- split: test
path: PT/test-*
---
|
nappa0326/glaive-function-calling-v2-sharegpt-japanese | nappa0326 | "2024-11-28T13:04:49Z" | 6 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T07:56:34Z" | ---
license: mit
---
- This dataset is a Japanese translation of [lilacai/glaive-function-calling-v2-sharegpt](https://huggingface.co/datasets/lilacai/glaive-function-calling-v2-sharegpt). |
rohitschauhanitbhu/jenny-tts-tags-6h-v1 | rohitschauhanitbhu | "2024-11-27T09:16:23Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T08:24:09Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 1640896
num_examples: 4000
download_size: 1041813
dataset_size: 1640896
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/oh_v1.2_sin_unnatural_instructions_diversity | mlfoundations-dev | "2024-11-28T14:05:29Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T09:17:51Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: shard_id
dtype: string
- name: output
dtype: string
- name: ngram_3_uniqueness
dtype: float64
- name: entropy
dtype: float64
- name: gini_index
dtype: float64
- name: self_bleu
dtype: float64
- name: embeddings
dtype: string
- name: kmeans_inertia_embeddings
dtype: float64
- name: kmeans_inertia_embeddings_normalized
dtype: float64
- name: new_conversations
dtype: string
- name: projected_gradients
dtype: string
- name: kmeans_inertia_gradients
dtype: float64
- name: kmeans_inertia_gradients_normalized
dtype: float64
splits:
- name: train
num_bytes: 12792894384
num_examples: 821594
download_size: 8030956000
dataset_size: 12792894384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
316usman/med-qa-option-with-answer-format | 316usman | "2024-11-27T10:01:24Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T10:01:16Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 9430572
num_examples: 10178
- name: test
num_bytes: 1206375
num_examples: 1273
- name: validation
num_bytes: 1179081
num_examples: 1272
download_size: 6817888
dataset_size: 11816028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
kodetr/stunting-qa-v2 | kodetr | "2024-11-27T14:04:08Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T12:32:21Z" | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: source_doc
dtype: string
splits:
- name: train
num_bytes: 3624459
num_examples: 3739
download_size: 1858365
dataset_size: 3624459
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Skyler215/KTVIC | Skyler215 | "2024-11-27T13:16:12Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T12:56:06Z" | ---
dataset_info:
features:
- name: img_new
dtype: image
- name: labels_new
dtype: string
splits:
- name: train
num_bytes: 16482412849.5
num_examples: 15076
- name: val
num_bytes: 4075602406.875
num_examples: 3769
- name: test
num_bytes: 2937043726.25
num_examples: 2790
download_size: 20966432847
dataset_size: 23495058982.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
zelk12/string_in_number | zelk12 | "2024-11-27T16:35:44Z" | 6 | 0 | [
"language:en",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T13:29:42Z" | ---
language:
- en
---
A dataset that will contain data in the following form: a text and its representation as a six-digit number. |
kavanamk/LOL | kavanamk | "2024-11-27T14:04:30Z" | 6 | 0 | [
"license:unknown",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-27T14:03:35Z" | ---
license: unknown
---
|
Ro551/corruptedText_GEC_spanish_med | Ro551 | "2024-11-27T14:05:28Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T14:03:55Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: corrupted
dtype: string
- name: tokens
sequence: string
- name: error_tags
sequence:
class_label:
names:
'0': O
'1': G/gen
'2': G/num-sing
'3': G/num-plur
'4': G/verbForm
'5': G/uArt
'6': G/wo
'7': P/missing
'8': S/title
'9': S/noAccent
- name: error_type
sequence: string
splits:
- name: train
num_bytes: 482913560
num_examples: 342835
download_size: 196748072
dataset_size: 482913560
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hexuan21/math-sft-mix-full-w2-sub-1 | hexuan21 | "2024-11-28T10:36:24Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T15:02:52Z" | ---
license: apache-2.0
---
|
lorixmassello/Akka_Finetuning_Llama3.2 | lorixmassello | "2024-11-27T15:34:42Z" | 6 | 0 | [
"task_categories:question-answering",
"language:en",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dsl",
"code",
"akka",
"llama"
] | [
"question-answering"
] | "2024-11-27T15:13:19Z" | ---
task_categories:
- question-answering
language:
- en
tags:
- dsl
- code
- akka
- llama
pretty_name: akka_dsl_llama
--- |
asoria/test_repo | asoria | "2024-11-28T19:59:35Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T15:26:59Z" | ---
dataset_info:
features:
- name: category
dtype: string
- name: headline
dtype: string
- name: summary
dtype: string
- name: time
dtype: string
- name: image
struct:
- name: alt
dtype: string
- name: src
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 5367
num_examples: 11
download_size: 7816
dataset_size: 5367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ziyu3141/rich_feedbac3 | ziyu3141 | "2024-11-27T19:04:42Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T15:38:48Z" | ---
dataset_info:
features:
- name: number
dtype: int64
- name: matrix
sequence:
sequence: int64
splits:
- name: train
num_bytes: 96
num_examples: 1
download_size: 1795
dataset_size: 96
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
akhooli/ar_mmarco_dfs200k_q | akhooli | "2024-11-27T17:39:12Z" | 6 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T15:41:44Z" | ---
license: mit
dataset_info:
features:
- name: query_id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13646887
num_examples: 200000
download_size: 8215562
dataset_size: 13646887
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sumuks/fairytaleqa-single-shot-questions-multihop-original-in-progress | sumuks | "2024-11-27T18:12:25Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T17:59:12Z" | ---
dataset_info:
features:
- name: chunk_ids
sequence: string
- name: generator_model
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: document_analysis
dtype: string
- name: chunk_analysis
sequence: string
- name: potential_question_directions
sequence: string
- name: best_direction
dtype: string
- name: reasoning
dtype: string
- name: estimated_difficulty
dtype: int64
- name: testable_concepts
sequence: string
- name: difficulty_justification
dtype: string
- name: quote_context
dtype: string
- name: supporting_quotes
sequence: string
splits:
- name: train
num_bytes: 1981611
num_examples: 722
download_size: 94235
dataset_size: 1981611
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/elix_eval_4shot_llama-part1-of-1 | Asap7772 | "2024-11-27T20:35:37Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T18:26:08Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 209849066
num_examples: 16100
download_size: 80632018
dataset_size: 209849066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sseilene/M4-performance-xl | sseilene | "2024-11-27T19:23:31Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T18:52:54Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: year
dtype: string
- name: n_samples
dtype: int64
- name: correlation
dtype: float64
- name: n_similar
dtype: int64
- name: n_unsimilar
dtype: int64
- name: auc_score
dtype: float64
- name: precision
dtype: float64
- name: recall
dtype: float64
- name: f1_score
dtype: float64
splits:
- name: train
num_bytes: 2016
num_examples: 28
download_size: 6674
dataset_size: 2016
---
|
k4d3/fluent | k4d3 | "2024-11-28T17:31:03Z" | 6 | 1 | [
"license:wtfpl",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-27T18:57:54Z" | ---
license: wtfpl
---
|
Asap7772/elix_zero_shot_eval | Asap7772 | "2024-11-27T20:18:25Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T20:18:22Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: responses
sequence: string
- name: pairs
sequence:
sequence: int64
- name: sft_target
dtype: string
- name: level
sequence: string
- name: last_q
dtype: string
splits:
- name: train
num_bytes: 19296332
num_examples: 9
download_size: 165526
dataset_size: 19296332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Newvel/narrativeqa_filtered_unique_tokenized | Newvel | "2024-11-27T20:29:35Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T20:29:17Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: __index_level_0__
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 393223348
num_examples: 1102
- name: test
num_bytes: 120596232
num_examples: 355
- name: validation
num_bytes: 40456439
num_examples: 115
download_size: 304855049
dataset_size: 554276019
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
Asap7772/hh_length_zero_shot_eval_llama-part1-of-1 | Asap7772 | "2024-11-27T20:44:42Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-27T20:44:38Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 16720487
num_examples: 8379
download_size: 8824185
dataset_size: 16720487
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EleutherAI/profiles_dataset_500_uniform_r17 | EleutherAI | "2024-11-28T00:35:21Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T00:35:20Z" | ---
dataset_info:
features:
- name: name
dtype: string
- name: index
dtype: int32
- name: birth_date
dtype: timestamp[s]
- name: birth_city
dtype: string
- name: university
dtype: string
- name: employer
dtype: string
- name: parent
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: child
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: best_friend
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: worst_enemy
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: sibling
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: spouse
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: cousin
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: grandparent
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: grandchild
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: business_partner
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: protege
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: mentor
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: betrayer
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: debtor
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: blackmailer
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: hero
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: evil_twin
struct:
- name: name
dtype: string
- name: index
dtype: int32
- name: bio
dtype: string
splits:
- name: train
num_bytes: 304644
num_examples: 500
download_size: 199694
dataset_size: 304644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AK115/MBL | AK115 | "2024-11-28T01:06:43Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T00:39:35Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 25649
num_examples: 43
download_size: 16738
dataset_size: 25649
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qfq/train_rawcot_o1_preview_noanswer | qfq | "2024-11-28T03:07:04Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T02:56:46Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: attempt
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
sequence: string
splits:
- name: train
num_bytes: 8472226
num_examples: 1146
download_size: 3811067
dataset_size: 8472226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ecaccam/armada-logics-ft | ecaccam | "2024-11-28T03:26:14Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T03:11:24Z" | ---
license: apache-2.0
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
rathore11/cheeku | rathore11 | "2024-11-28T06:46:54Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-28T06:25:08Z" | ---
license: apache-2.0
---
|
AnshuPatel-Vikalp/nutritions | AnshuPatel-Vikalp | "2024-11-28T09:41:16Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T06:29:08Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5854
num_examples: 33
download_size: 5857
dataset_size: 5854
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Raeya/final | Raeya | "2024-11-28T07:25:50Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T06:32:07Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 83838970.45
num_examples: 1190
download_size: 91471710
dataset_size: 83838970.45
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Donghyun99/Stanford-Dogs | Donghyun99 | "2024-11-28T07:03:22Z" | 6 | 0 | [
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | "2024-11-28T06:54:19Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': English_foxhound
'1': dhole
'2': Chesapeake_Bay_retriever
'3': Irish_wolfhound
'4': Afghan_hound
'5': Australian_terrier
'6': Saint_Bernard
'7': dingo
'8': Siberian_husky
'9': Sealyham_terrier
'10': redbone
'11': Norwegian_elkhound
'12': schipperke
'13': soft
'14': komondor
'15': affenpinscher
'16': Kerry_blue_terrier
'17': Pekinese
'18': black
'19': malinois
'20': Boston_bull
'21': Airedale
'22': otterhound
'23': Samoyed
'24': standard_poodle
'25': basset
'26': Maltese_dog
'27': Bedlington_terrier
'28': Lhasa
'29': Great_Pyrenees
'30': Scottish_deerhound
'31': Cardigan
'32': English_springer
'33': German_shepherd
'34': Lakeland_terrier
'35': Italian_greyhound
'36': Norfolk_terrier
'37': keeshond
'38': curly
'39': Irish_terrier
'40': Japanese_spaniel
'41': Greater_Swiss_Mountain_dog
'42': EntleBucher
'43': borzoi
'44': American_Staffordshire_terrier
'45': Gordon_setter
'46': Doberman
'47': Appenzeller
'48': Border_terrier
'49': Mexican_hairless
'50': Rhodesian_ridgeback
'51': bloodhound
'52': pug
'53': Sussex_spaniel
'54': kelpie
'55': French_bulldog
'56': cocker_spaniel
'57': Border_collie
'58': Bouvier_des_Flandres
'59': giant_schnauzer
'60': toy_terrier
'61': Blenheim_spaniel
'62': miniature_pinscher
'63': Bernese_mountain_dog
'64': West_Highland_white_terrier
'65': golden_retriever
'66': Shetland_sheepdog
'67': Ibizan_hound
'68': kuvasz
'69': Tibetan_mastiff
'70': papillon
'71': Weimaraner
'72': vizsla
'73': whippet
'74': briard
'75': Tibetan_terrier
'76': flat
'77': Pomeranian
'78': bluetick
'79': Staffordshire_bullterrier
'80': Rottweiler
'81': bull_mastiff
'82': silky_terrier
'83': Walker_hound
'84': Irish_setter
'85': cairn
'86': Newfoundland
'87': Shih
'88': wire
'89': malamute
'90': basenji
'91': English_setter
'92': miniature_poodle
'93': toy_poodle
'94': Brabancon_griffon
'95': Saluki
'96': German_short
'97': chow
'98': African_hunting_dog
'99': Eskimo_dog
'100': Welsh_springer_spaniel
'101': boxer
'102': Old_English_sheepdog
'103': Dandie_Dinmont
'104': Yorkshire_terrier
'105': Irish_water_spaniel
'106': groenendael
'107': standard_schnauzer
'108': Norwich_terrier
'109': Brittany_spaniel
'110': beagle
'111': Scotch_terrier
'112': Great_Dane
'113': miniature_schnauzer
'114': Leonberg
'115': collie
'116': Labrador_retriever
'117': clumber
'118': Pembroke
'119': Chihuahua
splits:
- name: train
num_bytes: 391917813.06122446
num_examples: 12000
- name: test
num_bytes: 294652350.17877555
num_examples: 8580
download_size: 776331392
dataset_size: 686570163.24
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
task_categories:
- image-classification
language:
- en
---
# Dataset Card for "Stanford-Dogs"
This is a non-official ```Stanford-Dogs``` dataset for **fine-grained** ```Image Classification```.
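A minimal loading sketch (hedged: it assumes the repository ID shown on this page and the standard `datasets` image/label features declared in the YAML above):

```python
# Hedged sketch: load the dataset and inspect one sample.
from datasets import load_dataset

ds = load_dataset("Donghyun99/Stanford-Dogs")

label_names = ds["train"].features["label"].names  # 120 breed names
sample = ds["train"][0]
print(sample["image"].size, "->", label_names[sample["label"]])
```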
If you want to download the official dataset, please refer to the official page [here](http://vision.stanford.edu/aditya86/ImageNetDogs/). |
ddosxd/merge | ddosxd | "2024-01-28T22:52:49Z" | 5 | 0 | [
"language:en",
"language:ru",
"size_categories:1M<n<10M",
"region:us"
] | null | "2024-01-28T22:16:23Z" |  |
un-Loc/dataset-warsaw-palace | un-Loc | "2024-08-30T14:01:50Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-30T14:01:35Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 68221298.0
num_examples: 34
download_size: 68077529
dataset_size: 68221298.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UserID004/asheley_dataset | UserID004 | "2024-10-13T05:17:53Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-27T12:26:19Z" | ---
license: apache-2.0
---
|
mpieck/glaive-function-calling-v2-pl | mpieck | "2024-11-28T19:12:47Z" | 5 | 0 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:pl",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"function-calling",
"LLM agent",
"code"
] | [
"text-generation",
"question-answering"
] | "2024-10-04T20:13:53Z" | ---
language:
- pl
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- question-answering
dataset_info:
features:
- name: chat
dtype: string
- name: system
dtype: string
splits:
- name: train
num_bytes: 1560423
num_examples: 1014
download_size: 417389
dataset_size: 1560423
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- function-calling
- LLM agent
- code
---
# Dataset Card for glaive-function-calling-v2-pl Dataset
## Dataset Description
This is a fragment of the [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) dataset translated to Polish.
It contains the first 1k instructions of the original dataset. Only instructions that include function definitions and function calls are kept; instructions without function calls in the original dataset are skipped.
The [speakleash/Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) LLM was used for the English-Polish translation of this dataset.
The LLM translation of every instruction in this dataset was manually verified and corrected where errors were found.
## Dataset Structure
The dataset contains two fields: `system` and `chat`. The property names in the JSON function definitions and function calls were kept in English; the rest of the text was translated to Polish.
### Data Fields
`chat`: user/assistant messages with function calls
`system`: system prompt with the available function definitions
### Example instruction
system:
```text
Jesteś pomocnym asystentem z dostępem do następujących funkcji. Użyj ich, jeśli będzie to potrzebne.
{
"name": "generate_random_password",
"description": "Generowanie losowego hasła",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "Długość hasła"
}
},
"required": [
"length"
]
}
}
```
chat:
```text
USER: Cześć, potrzebuję nowego hasła. Czy możesz wygenerować dla mnie losowe?
ASSISTANT: Oczywiście, mogę Ci w tym pomóc. Jak długie chciałbyś, żeby było twoje hasło? <|endoftext|>
USER: Chciałbym, żeby miało 12 znaków długości.
ASSISTANT: <functioncall> {"name": "generate_random_password", "arguments": {"length": 12}} <|endoftext|>
FUNCTION RESPONSE: {"password": "aB3$fg7&KLo9"}
ASSISTANT: Oto Twoje nowe hasło: aB3$fg7&KLo9. Upewnij się, że zapiszesz je w bezpiecznym miejscu. <|endoftext|>
```
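A minimal sketch for working with these fields (hedged: it assumes the repository ID above, a `train` split, and the `<functioncall> ... <|endoftext|>` markers shown in the example chat):

```python
# Hedged sketch: load the corpus and extract the function-call JSON
# from the assistant turns of one example.
import json
import re

from datasets import load_dataset

ds = load_dataset("mpieck/glaive-function-calling-v2-pl", split="train")
example = ds[0]

print(example["system"][:200])  # system prompt with the function definitions

# Assistant turns that call a function look like:
#   ASSISTANT: <functioncall> {"name": ..., "arguments": {...}} <|endoftext|>
pattern = r"<functioncall>\s*(\{.*?\})\s*<\|endoftext\|>"
for raw in re.findall(pattern, example["chat"], flags=re.DOTALL):
    call = json.loads(raw)
    print(call["name"], call.get("arguments"))
```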
### License
Apache 2.0
### Author
Maciej Piecko
|
MLDS-NUS/double_pendulum | MLDS-NUS | "2024-11-28T10:00:55Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-06T08:45:46Z" | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 53604609.638
num_examples: 1391
download_size: 53638781
dataset_size: 53604609.638
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
5CD-AI/Viet-syntheticDocQA_artificial_intelligence_test-Gemini | 5CD-AI | "2024-11-29T00:57:26Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-10T19:03:57Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: image
dtype: image
- name: image_filename
dtype: string
- name: answer
dtype: string
- name: page
dtype: string
- name: model
dtype: string
- name: prompt
dtype: string
- name: source
dtype: string
- name: vi_image
dtype: image
- name: original_text
dtype: string
- name: translated_text
dtype: string
splits:
- name: test
num_bytes: 60601333.0
num_examples: 100
download_size: 30329589
dataset_size: 60601333.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
FrancophonIA/Vocabulaire_panlatin_du_surf | FrancophonIA | "2024-11-16T19:45:27Z" | 5 | 0 | [
"task_categories:translation",
"language:en",
"language:it",
"language:ca",
"language:es",
"language:fr",
"language:gl",
"language:pt",
"language:ro",
"region:us",
"es_ES",
"fr_FR",
"fr_QC",
"pt_BR"
] | [
"translation"
] | "2024-11-16T19:37:02Z" | ---
language:
- en
- it
- ca
- es
- fr
- gl
- pt
- ro
tags:
- es_ES
- fr_FR
- fr_QC
- pt_BR
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/Thematiques/langue-francaise-et-langues-de-france/Agir-pour-les-langues/Moderniser-et-enrichir-la-langue-francaise/Nos-publications/vocabulaires-panlatins-du-sport/vocabulaire-panlatin-du-surf
## Description
This lexicon is the result of a collaboration between the Délégation générale à la langue française et aux langues de France, the pan-Latin terminology network REALITER and the Université Paul Valéry in Montpellier.
Produced with the 2024 Olympic Games in view, it presents surfing terms in Catalan, Spanish (Spain), French (France and Quebec), Galician, Italian, Brazilian Portuguese, Romanian, and English. |
FrancophonIA/Vocabulaire_panlatin_nanotechnologie_2 | FrancophonIA | "2024-11-16T20:52:30Z" | 5 | 0 | [
"task_categories:translation",
"language:it",
"language:ca",
"language:es",
"language:fr",
"language:gl",
"language:pt",
"language:ro",
"language:en",
"region:us",
"pt_BR",
"pt_PT",
"es_AR",
"es_ES",
"fr_QC"
] | [
"translation"
] | "2024-11-16T20:51:02Z" | ---
language:
- it
- ca
- es
- fr
- gl
- pt
- ro
- en
tags:
- pt_BR
- pt_PT
- es_AR
- es_ES
- fr_QC
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://www.realiter.net/fr/lessici-realiter
## Description
Development of a lexicon of 160 concepts related to the field of nanotechnology, a multidisciplinary domain. Nanotechnology is chiefly concerned with the fabrication of molecular structures that have at least one dimension measuring between 1 and 100 nanometres. Accordingly, some of the terms treated in the lexicon designate the techniques, instruments and units of measurement used to study and fabricate nanometre-scale entities. Broadly speaking, the terms of the nomenclature presented relate to physics, chemistry, biology, electronics and computer science. |
FrancophonIA/Vocabulaire_panlatin_velo | FrancophonIA | "2024-11-16T21:34:25Z" | 5 | 0 | [
"task_categories:translation",
"language:it",
"language:ca",
"language:es",
"language:fr",
"language:gl",
"language:pt",
"language:ro",
"language:en",
"region:us",
"es_ES",
"es_MX",
"pt_BR",
"pt_PT",
"fr_QC"
] | [
"translation"
] | "2024-11-16T21:33:57Z" | ---
language:
- it
- ca
- es
- fr
- gl
- pt
- ro
- en
tags:
- es_ES
- es_MX
- pt_BR
- pt_PT
- fr_QC
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://www.realiter.net/fr/lessici-realiter |
NAMAA-Space/Arabic-Triplet-With-Multi-Negatives | NAMAA-Space | "2024-11-21T05:58:58Z" | 5 | 0 | [
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"language:ar",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2108.08787",
"region:us"
] | [
"feature-extraction",
"sentence-similarity"
] | "2024-11-19T06:57:05Z" | ---
license: apache-2.0
task_categories:
- feature-extraction
- sentence-similarity
language:
- ar
size_categories:
- 10K<n<100K
---
# Arabic Triplet with Multi Negatives
## Dataset Summary
This dataset is a modified version of the Arabic subset of the [Mr. TyDi dataset](https://huggingface.co/datasets/castorini/mr-tydi), tailored for retrieval and re-ranking tasks. The original dataset has been restructured by splitting the negative passages into separate fields (`negative1`, `negative2`, ..., `negativeN`) for each query. This modification allows more flexibility for training and evaluating retrieval and re-ranking models.
The dataset retains the original intent of Mr. TyDi, focusing on monolingual retrieval for the Arabic language while offering a new structure for ease of use in ranking-based learning tasks.
## Dataset Structure
The dataset includes a train split only, where each query is paired with a set of positive passages and multiple individually enumerated negative passages (up to 30).
### Example Data
#### Train Set
```json
{
"query_id": "1",
"query": "متى تم تطوير نظرية الحقل الكمي؟",
"positive_passages": [
{
"text": "بدأت نظرية الحقل الكمي بشكل طبيعي بدراسة التفاعلات الكهرومغناطيسية ..."
}
],
"negative1": {
"text": "تم تنفيذ النهج مؤخرًا ليشمل نسخة جبرية من الحقل الكمي ..."
},
"negative2": {
"text": "تتناول هذه المقالة الخلفية التاريخية لتطوير نظرية الحقل ..."
},
...
}
```
### Language Coverage
The dataset focuses exclusively on the **Arabic** subset of Mr. TyDi.
### Loading the Dataset
You can load the dataset using the **datasets** library from Hugging Face:
```python
from datasets import load_dataset
dataset = load_dataset('NAMAA-Space/Arabic-Triplet-With-Multi-Negatives')
dataset
```
### Dataset Usage
The new format facilitates training retrieval and re-ranking models by providing explicit negative passage fields. This structure simplifies the handling of negative examples during model training and evaluation.
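For illustration, a minimal sketch (hedged: it assumes the field names `query`, `positive_passages`, and `negative1` … `negative30` shown in the example above) that flattens one row into (query, positive, negative) training triplets:

```python
# Hedged sketch: build (query, positive, negative) triplets from one row.
from datasets import load_dataset

ds = load_dataset("NAMAA-Space/Arabic-Triplet-With-Multi-Negatives", split="train")

def to_triplets(row, max_negatives=30):
    """Yield (query, positive_text, negative_text) tuples for a single row."""
    positives = [p["text"] for p in row["positive_passages"]]
    negatives = []
    for i in range(1, max_negatives + 1):
        neg = row.get(f"negative{i}")  # assumption: fields are named negative1 .. negative30
        if neg and neg.get("text"):
            negatives.append(neg["text"])
    for pos in positives:
        for neg in negatives:
            yield row["query"], pos, neg

triplets = list(to_triplets(ds[0]))
print(len(triplets))
```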
### Citation Information
If you use this dataset in your research, please cite the original Mr. TyDi paper and this dataset as follows:
```
@article{mrtydi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
@dataset{Namaa,
title={Arabic Triplet With Multi Negatives},
author={Omer Nacar},
year={2024},
note={Hugging Face Dataset Repository}
}
```
|
MarcMill/biobertv1 | MarcMill | "2024-11-20T06:58:04Z" | 5 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-20T06:56:57Z" | ---
license: apache-2.0
---
|
aminv/wordpress_qa | aminv | "2024-11-20T15:56:45Z" | 5 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T15:46:33Z" | ---
license: mit
---
|
FrancophonIA/Belgian_government_bilingual_parallel_corpus | FrancophonIA | "2024-11-21T14:21:51Z" | 5 | 0 | [
"task_categories:translation",
"language:nl",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-21T14:21:11Z" | ---
language:
- nl
- fr
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/18952
## Description
Aligned texts from the Belgian government in French and Dutch (aligned with SDL Trados Studio)
## Citation
```
Belgian government bilingual parallel corpus (2022). Version 1.0. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/18952
``` |
FrancophonIA/Translations_Hungarian_public_websites | FrancophonIA | "2024-11-21T14:36:03Z" | 5 | 0 | [
"task_categories:translation",
"language:fr",
"language:pl",
"language:cs",
"language:sv",
"language:fi",
"language:de",
"language:it",
"language:en",
"language:sl",
"region:us"
] | [
"translation"
] | "2024-11-21T14:32:50Z" | ---
language:
- fr
- pl
- cs
- sv
- fi
- de
- it
- en
- sl
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/18982
## Description
A web crawl of 14 different websites providing parallel corpora of Hungarian with Polish, Czech, Swedish, Finnish, French, German, Italian, English and Slovenian.
## Citation
```
Translations of Hungarian from public websites (2022). Version 1.0. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/18982
``` |
FrancophonIA/Charter_values_citizenship_integration | FrancophonIA | "2024-11-21T14:44:51Z" | 5 | 0 | [
"task_categories:translation",
"language:de",
"language:es",
"language:en",
"language:it",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-21T14:41:40Z" | ---
language:
- de
- es
- en
- it
- fr
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19058
## Description
The integration agreement form, prepared for signing the pact between the foreign national and the state, sets out the foreign national's commitments and includes a declaration by the person concerned to adhere to the Charter of the values of citizenship and integration (decree of the Minister of 23 April 2007), pledging to respect its principles. The Charter of the values of citizenship and integration, adopted in 2007, summarizes the fundamental principles of the Italian legal system governing collective life, for citizens and immigrants alike. The Charter, drawn up according to the principles of the Italian Constitution and the major European and international human rights charters, focuses especially on the issues that multiculturalism poses to Western societies. - Corpora Multilingual - Provided by Flavia Vecchione. - MINISTERO DELL'INTERNO website
## Citation
```
CHARTER OF VALUES OF CITIZENSHIP AND INTEGRATION (Processed) (2018, October 04). Version 2.0. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19058
``` |
FrancophonIA/Cyprus_Europe | FrancophonIA | "2024-11-21T15:41:07Z" | 5 | 0 | [
"task_categories:translation",
"language:en",
"language:de",
"language:el",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-21T15:39:51Z" | ---
language:
- en
- de
- el
- fr
multilingulality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/18916
## Description
It contains 28 txt files (7 in English, 7 in French, 7 in German and 7 in Greek), drawn from the publication "Cyprus has always been Europe 2017" of the Press and Information Office of Cyprus.
## Citation
```
PIO Publication "Cyprus has always been Europe 2017" (2018, December 18). Version 1.0. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19081
``` |
LLAAMM/text2image1m | LLAAMM | "2024-11-23T05:26:03Z" | 5 | 0 | [
"task_categories:text-to-image",
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-image"
] | "2024-11-21T18:35:34Z" | ---
language:
- en
size_categories:
- 100K<n<1M
task_categories:
- text-to-image
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 7805188493.0
num_examples: 1000000
download_size: 7719745028
dataset_size: 7805188493.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
StudMSacc/Ultrachat-200k-fixed | StudMSacc | "2024-11-22T07:46:45Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:41:25Z" | ---
license: apache-2.0
---
|
Thermostatic/Biblia-Antiguo-Testamento-Nahuatl-Huasteca-Oriental | Thermostatic | "2024-11-23T02:18:50Z" | 5 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T02:18:36Z" | ---
license: mit
---
|