id (stringlengths 2-115) | README (stringlengths 0-977k) |
---|---|
joey234/mmlu-high_school_geography-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 67356
num_examples: 198
download_size: 43021
dataset_size: 67356
---
# Dataset Card for "mmlu-high_school_geography-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
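The YAML above only describes the schema. A minimal usage sketch (not part of the original card), assuming the `datasets` library and the `test` split and field names listed above; the printed formatting is illustrative only:

```python
from datasets import load_dataset

# Load the test split declared in the card's dataset_info block.
ds = load_dataset("joey234/mmlu-high_school_geography-verbal-neg-prepend", split="test")

example = ds[0]
answer_feature = ds.features["answer"]              # ClassLabel with names A-D
letter = answer_feature.int2str(example["answer"])  # e.g. 2 -> "C"

print(example["neg_prompt"])                        # the prepended negation prompt
print(example["question"])
for choice_letter, choice in zip("ABCD", example["choices"]):
    print(f"{choice_letter}. {choice}")
print("Gold answer:", letter)
```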
joey234/mmlu-high_school_government_and_politics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 97526
num_examples: 193
download_size: 57388
dataset_size: 97526
---
# Dataset Card for "mmlu-high_school_government_and_politics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_macroeconomics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 176962
num_examples: 390
download_size: 82695
dataset_size: 176962
---
# Dataset Card for "mmlu-high_school_macroeconomics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_mathematics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 98682
num_examples: 270
download_size: 58437
dataset_size: 98682
---
# Dataset Card for "mmlu-high_school_mathematics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_microeconomics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 111886
num_examples: 238
download_size: 57599
dataset_size: 111886
---
# Dataset Card for "mmlu-high_school_microeconomics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_physics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 92715
num_examples: 151
download_size: 51970
dataset_size: 92715
---
# Dataset Card for "mmlu-high_school_physics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_psychology-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 252392
num_examples: 545
download_size: 146713
dataset_size: 252392
---
# Dataset Card for "mmlu-high_school_psychology-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_statistics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 166392
num_examples: 216
download_size: 87555
dataset_size: 166392
---
# Dataset Card for "mmlu-high_school_statistics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_us_history-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 344070
num_examples: 204
download_size: 179838
dataset_size: 344070
---
# Dataset Card for "mmlu-high_school_us_history-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_world_history-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 428765
num_examples: 237
download_size: 231609
dataset_size: 428765
---
# Dataset Card for "mmlu-high_school_world_history-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-human_aging-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 73308
num_examples: 223
download_size: 46912
dataset_size: 73308
---
# Dataset Card for "mmlu-human_aging-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-human_sexuality-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 49813
num_examples: 131
download_size: 34784
dataset_size: 49813
---
# Dataset Card for "mmlu-human_sexuality-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-international_law-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 75599
num_examples: 121
download_size: 41715
dataset_size: 75599
---
# Dataset Card for "mmlu-international_law-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-jurisprudence-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 52047
num_examples: 108
download_size: 34812
dataset_size: 52047
---
# Dataset Card for "mmlu-jurisprudence-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-logical_fallacies-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 76383
num_examples: 163
download_size: 36493
dataset_size: 76383
---
# Dataset Card for "mmlu-logical_fallacies-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-machine_learning-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 51682
num_examples: 112
download_size: 29605
dataset_size: 51682
---
# Dataset Card for "mmlu-machine_learning-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-management-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 31802
num_examples: 103
download_size: 21913
dataset_size: 31802
---
# Dataset Card for "mmlu-management-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-marketing-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 105024
num_examples: 234
download_size: 61112
dataset_size: 105024
---
# Dataset Card for "mmlu-marketing-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-medical_genetics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 33343
num_examples: 100
download_size: 24072
dataset_size: 33343
---
# Dataset Card for "mmlu-medical_genetics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-miscellaneous-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 237559
num_examples: 783
download_size: 153226
dataset_size: 237559
---
# Dataset Card for "mmlu-miscellaneous-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-moral_disputes-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 163056
num_examples: 346
download_size: 91098
dataset_size: 163056
---
# Dataset Card for "mmlu-moral_disputes-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-moral_scenarios-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 526800
num_examples: 895
download_size: 108821
dataset_size: 526800
---
# Dataset Card for "mmlu-moral_scenarios-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-nutrition-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 138278
num_examples: 306
download_size: 81347
dataset_size: 138278
---
# Dataset Card for "mmlu-nutrition-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-philosophy-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 123046
num_examples: 311
download_size: 72424
dataset_size: 123046
---
# Dataset Card for "mmlu-philosophy-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-prehistory-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 137437
num_examples: 324
download_size: 82027
dataset_size: 137437
---
# Dataset Card for "mmlu-prehistory-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-professional_accounting-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 201532
num_examples: 282
download_size: 111563
dataset_size: 201532
---
# Dataset Card for "mmlu-professional_accounting-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-professional_law-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 2471871
num_examples: 1534
download_size: 1367759
dataset_size: 2471871
---
# Dataset Card for "mmlu-professional_law-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-professional_medicine-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 295436
num_examples: 272
download_size: 168635
dataset_size: 295436
---
# Dataset Card for "mmlu-professional_medicine-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-professional_psychology-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 349773
num_examples: 612
download_size: 203539
dataset_size: 349773
---
# Dataset Card for "mmlu-professional_psychology-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-public_relations-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 47692
num_examples: 110
download_size: 32701
dataset_size: 47692
---
# Dataset Card for "mmlu-public_relations-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-security_studies-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 274355
num_examples: 245
download_size: 152071
dataset_size: 274355
---
# Dataset Card for "mmlu-security_studies-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-sociology-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 95791
num_examples: 201
download_size: 61994
dataset_size: 95791
---
# Dataset Card for "mmlu-sociology-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-us_foreign_policy-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 42692
num_examples: 100
download_size: 28193
dataset_size: 42692
---
# Dataset Card for "mmlu-us_foreign_policy-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alpayariyak/MATH_Embedded_Instructor-XL | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 86223883
num_examples: 12500
download_size: 67650333
dataset_size: 86223883
---
# Dataset Card for "MATH_Embedded_Instructor-XL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
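A minimal retrieval sketch (an assumption, not part of the original card) showing one way to use the precomputed `embedding` column for nearest-neighbour search; a stored row is reused as a stand-in query, since the card does not describe the query-side encoder setup:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("alpayariyak/MATH_Embedded_Instructor-XL", split="train")

# Stack the precomputed embeddings and unit-normalize for cosine similarity.
emb = np.array(ds["embedding"], dtype=np.float64)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

query = emb[0]                       # stand-in query vector (row 0 reused)
scores = emb @ query                 # cosine similarity against all problems
top = np.argsort(-scores)[1:6]       # 5 nearest neighbours, skipping the query itself
for i in top:
    print(round(float(scores[i]), 3), ds[int(i)]["problem"][:80])
```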
joey234/mmlu-virology-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 59734
num_examples: 166
download_size: 40286
dataset_size: 59734
---
# Dataset Card for "mmlu-virology-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-world_religions-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 42150
num_examples: 171
download_size: 29064
dataset_size: 42150
---
# Dataset Card for "mmlu-world_religions-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chenai2/chenai-oushi-jiaju-controlnet-dataset | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: condtioning_image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 57901218.0
num_examples: 104
download_size: 57912318
dataset_size: 57901218.0
---
# Dataset Card for "chenai-oushi-jiaju-controlnet-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crcb/emo_is | ---
license: apache-2.0
---
|
rahulchakwate/squad-QG-dataset-shuffled | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 78708152
num_examples: 87599
download_size: 50003054
dataset_size: 78708152
---
# Dataset Card for "squad-QG-dataset-shuffled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
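The `answers` feature above is a nested sequence of parallel `text` / `answer_start` lists (SQuAD-style). A minimal sketch (assumption, not from the card) of reading it with the `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("rahulchakwate/squad-QG-dataset-shuffled", split="train")

row = ds[0]
print(row["title"], "-", row["question"])
# `answers` is returned as a dict of parallel lists; answer_start indexes the context.
for text, start in zip(row["answers"]["text"], row["answers"]["answer_start"]):
    print(repr(text), "at", start, "->", row["context"][start:start + len(text)])
```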
rahulchakwate/squad-QG-dataset-original | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 78708152
num_examples: 87599
download_size: 13814066
dataset_size: 78708152
---
# Dataset Card for "squad-QG-dataset-original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
achang/device_2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: label
dtype: string
- name: image
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 79859375
num_examples: 132
download_size: 38738
dataset_size: 79859375
---
# Dataset Card for "device_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
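The `image` feature above is a triply nested float32 sequence rather than an encoded image, so each row can be converted to a NumPy array; the array's shape is not stated on the card and is only inspected in this minimal sketch (an assumption, not part of the original card):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("achang/device_2", split="train")

row = ds[0]
arr = np.asarray(row["image"], dtype=np.float32)  # nested lists -> ndarray
print(arr.shape, arr.dtype)                       # shape depends on how arrays were stored
print(row["prompt"], "->", row["label"])
```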
generative-newsai/news-unmasked | ---
dataset_info:
features:
- name: image
dtype: image
- name: section
dtype: string
- name: headline
dtype: string
- name: image_id
dtype: string
splits:
- name: train
num_bytes: 5084636867.984
num_examples: 48988
- name: test
num_bytes: 1360809852.398
num_examples: 12247
download_size: 1331950856
dataset_size: 6445446720.382
task_categories:
- image-to-text
pretty_name: NewsUnmasked
---
# Dataset Card for "news-unmasked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
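A minimal sketch (not part of the original card) of iterating the image/headline pairs, assuming the `datasets` library decodes the `image` feature to a PIL image:

```python
from datasets import load_dataset

ds = load_dataset("generative-newsai/news-unmasked", split="test")

row = ds[0]
img = row["image"]                 # PIL.Image.Image decoded by the Image feature
print(img.size, row["section"], row["image_id"])
print("Headline:", row["headline"])
```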
Korrie/Pokemon_Images | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 8631269.0
num_examples: 833
download_size: 8458136
dataset_size: 8631269.0
---
# Dataset Card for "Pokemon_Images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-abstract_algebra-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 19525
num_examples: 100
download_size: 9627
dataset_size: 19525
---
# Dataset Card for "mmlu-abstract_algebra-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-anatomy-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 33692
num_examples: 135
download_size: 19850
dataset_size: 33692
---
# Dataset Card for "mmlu-anatomy-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-astronomy-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 47234
num_examples: 152
download_size: 28038
dataset_size: 47234
---
# Dataset Card for "mmlu-astronomy-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-business_ethics-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 33573
num_examples: 100
download_size: 20450
dataset_size: 33573
---
# Dataset Card for "mmlu-business_ethics-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-clinical_knowledge-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 63511
num_examples: 265
download_size: 40149
dataset_size: 63511
---
# Dataset Card for "mmlu-clinical_knowledge-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_biology-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 49310
num_examples: 144
download_size: 31547
dataset_size: 49310
---
# Dataset Card for "mmlu-college_biology-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_chemistry-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 25114
num_examples: 100
download_size: 17279
dataset_size: 25114
---
# Dataset Card for "mmlu-college_chemistry-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_computer_science-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 41303
num_examples: 100
download_size: 26956
dataset_size: 41303
---
# Dataset Card for "mmlu-college_computer_science-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_mathematics-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 24163
num_examples: 100
download_size: 15864
dataset_size: 24163
---
# Dataset Card for "mmlu-college_mathematics-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_medicine-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 83082
num_examples: 173
download_size: 42212
dataset_size: 83082
---
# Dataset Card for "mmlu-college_medicine-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_physics-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 30630
num_examples: 102
download_size: 18326
dataset_size: 30630
---
# Dataset Card for "mmlu-college_physics-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-computer_security-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 27322
num_examples: 100
download_size: 18603
dataset_size: 27322
---
# Dataset Card for "mmlu-computer_security-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-conceptual_physics-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 41680
num_examples: 235
download_size: 24838
dataset_size: 41680
---
# Dataset Card for "mmlu-conceptual_physics-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-econometrics-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 46690
num_examples: 114
download_size: 24025
dataset_size: 46690
---
# Dataset Card for "mmlu-econometrics-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
demoww/database | ---
license: openrail
---
|
cr7Por/my_controlnet | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_crop
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 135354742.0
num_examples: 435
download_size: 135278720
dataset_size: 135354742.0
---
# Dataset Card for "my_controlnet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
desertfox/hsereg2022 | ---
license: afl-3.0
---
|
ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered | This dataset is the WizardLM dataset victor123/evol_instruct_70k with instances of blatant alignment removed.
54974 instructions remain.
Inspired by https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
All credit to anon8231489123 for the cleanup script, which I adapted into wizardlm_clean.py.
---
license: apache-2.0
language:
- en
pretty_name: wizardlm-unfiltered
--- |
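A minimal, hypothetical sketch of the kind of keyword filter such a cleanup script might apply; it is not the actual wizardlm_clean.py, and the marker phrases and sample records are assumptions for illustration only:

```python
# Hypothetical refusal markers; the real script's criteria are not reproduced here.
REFUSAL_MARKERS = ["as an ai language model", "i cannot provide", "openai"]

def keep(example: dict) -> bool:
    """Return True if the instruction/output pair shows no refusal boilerplate."""
    text = (example.get("instruction", "") + " " + example.get("output", "")).lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

examples = [
    {"instruction": "Explain recursion.", "output": "Recursion is when a function calls itself..."},
    {"instruction": "Do X.", "output": "As an AI language model, I cannot provide that."},
]
cleaned = [ex for ex in examples if keep(ex)]
print(f"{len(cleaned)} of {len(examples)} instructions kept")
```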
fouadbakour/hiverse_analytics_faq_identity_sharegpt_AlpacaStyle-v2 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 177438089
num_examples: 91133
download_size: 85565573
dataset_size: 177438089
---
# Dataset Card for "hiverse_analytics_faq_identity_sharegpt_AlpacaStyle-v2"
This dataset contains the following:
Hiverse FAQ and Analytics dataset:
```json
[
{
"input": "",
"instruction": "What is Hiverse?",
"output": "Hiverse is a universal mixture of communication, collaboration and social features to help you expose your culture and keep your hives connected. Companies from all sizes can use Hiverse to ensure an effective, productive work cycle."
},
{
"input": "",
"instruction": "what are hiverse features?",
"output": "Hiverse provides multiple feature such social wall, organizer to organize your daily tasks and keep them on track, ability to sync your events with google and outlook calendars, HR system any many more.."
},
{
"input": "",
"instruction": "What features does Hiverse offer?",
"output": "Hiverse offers a variety of features including a social wall, an organizer to keep your daily tasks on track, the ability to sync events with Google and Outlook calendars, an HR system, and many more."
},
{
"input": "",
"instruction": "What can I expect from Hiverse's features?",
"output": "Hiverse offers a diverse range of features, including a social wall, task organizer, calendar syncing with Google and Outlook, HR system and more, to help you stay connected and productive."
},
{
"input": "",
"instruction": "How long did it take Bassel Metlej to test the front end of Eurisko Platform Frontend with React Native?",
"output": "Bassel Metlej spent 3 hours testing the front end of Eurisko Platform Frontend with React Native, ensuring that all components were working properly. He also debugged it to ensure there was no issue in usability tests and foresight into potential issues before release date Mar 29, 2023 and completed them successfully so as not to delay any further development or releases on this project which would have been delayed due to lack of time available at launch day."
},
{
"input": "",
"instruction": "How long did Bassel Metlej spend testing the front end of Eurisko Platform Frontend with React Native?",
"output": "Bassel Metlej spent 3 hours testing the front end of Eurisko Platform Frontend with React Native, ensuring that all components were working properly. He also debugged it to ensure there was no issue in usability tests and foresight into potential issues before release date Mar 29, 2023 so as not to delay any further development or releases on this project which would have been delayed due to lack of time available at launch day; thus preventing delays caused by limited resources during implementation phase for other projects related in line to his research interests which had already begun earlier than released dates of March 29th-2022 and June 30"
},
{
"input": "",
"instruction": "How long did Abdulrahman Qabbout spend working on Wizzo-Retainer?",
"output": "Abdulrahman Qabbout worked on a project called Wizzo-Retainer, which was due in March 29th 2023. He spent an average of 1 hour per day working with iOS and had to complete the task within that time frame as it required him to use his skillset for this type of work."
},
{
"input": "",
"instruction": "How long did it take Abdulrahman Qabbout to complete Wizzo-Retainer?",
"output": "Abdulrahman Qabbout worked on a project called Wizzo-Retainer, which was due in March 29th 2023. He spent an average of 1 hour per day working with iOS and had to complete the task within that time frame as it required him to use his skillset for this type of work in order not only meet but exceed deadlines set by Apple Inc, such as their new iPhone 11. The goal is simple: To provide users accessing services from different providers at once through one app or website without having them install any additional software before they are ready to be used online with minimal effort!"
},
{
"input": "",
"instruction": "What was the deadline for Wizzo-Retainer?",
"output": "Abdulrahman Qabbout worked on a project called Wizzo-Retainer, which was due in March 29th 2023. He spent an average of 1 hour per day working with iOS and had to complete the task within that time frame as it required him to use his skillset for this type of work so they could meet but exceed deadlines set by Apple Inc such as their new iPhone 11. The goal is simple: To provide users accessing services from different providers at once through one app or website without having them install any additional software before They are ready to be used online easily!"
},
```
Hiverse Identity Data:
```json
{
"instruction": "Who are you?",
"input": "",
"output": "I am HiveBrain, a language model trained by researchers from Eurisko Mobility."
},
{
"instruction": "Who are you?",
"input": "",
"output": "My name is HiveBrain, and I'm a language model developed by Eurisko Mobility."
},
{
"instruction": "Who are you?",
"input": "",
"output": "You can call me HiveBrain, and I was trained by Eurisko Mobility researchers as a language model."
},
{
"instruction": "Who are you?",
"input": "",
"output": "As a language model, I go by the name HiveBrain and was trained by researchers from Eurisko Mobility."
},
{
"instruction": "Who are you?",
"input": "",
"output": "I'm a language model called HiveBrain, and I was trained by Eurisko Mobility researchers."
},
```
ShareGPT Vicuna, with additional cleaning to remove non-English values and empty data:
https://huggingface.co/datasets/jeffwan/sharegpt_vicuna/blob/main/sharegpt_20230401_clean_lang_split.json
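A minimal sketch of the cleanup described above (dropping empty entries and keeping only English ones); this is an assumption of how it could be done, not the authors' script, and the use of the `langdetect` package and the sample records are illustrative choices only:

```python
from langdetect import detect

def keep(example: dict) -> bool:
    """Drop empty records and keep only English-language ones."""
    text = (example.get("instruction", "") + " " + example.get("output", "")).strip()
    if not text:
        return False                  # empty data -> dropped
    try:
        return detect(text) == "en"   # non-English values -> dropped
    except Exception:                 # langdetect raises on undetectable text
        return False

raw_examples = [
    {"instruction": "Who are you?", "input": "", "output": "I am HiveBrain, a language model."},
    {"instruction": "", "input": "", "output": ""},   # empty -> dropped
]
cleaned = [ex for ex in raw_examples if keep(ex)]
print(len(cleaned), "of", len(raw_examples), "examples kept")
```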
|
metaeval/x-fact | ---
dataset_info:
features:
- name: evidence
dtype: string
- name: claim
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 44250130
num_examples: 30162
- name: dev
num_bytes: 3501110
num_examples: 2393
- name: test
num_bytes: 5214557
num_examples: 3597
download_size: 29968985
dataset_size: 52965797
---
# Dataset Card for "x-fact"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fouadbakour/hiverse_faq_analytics_identity_gpt4_shareGPT-v1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 218743304
num_examples: 146245
download_size: 109900041
dataset_size: 218743304
---
# Dataset Card for "hiverse_faq_analytics_identity_gpt4_shareGPT-v1"
This dataset contains the following:
Hiverse FAQ and Analytics dataset:
```json
[
{
"input": "",
"instruction": "What is Hiverse?",
"output": "Hiverse is a universal mixture of communication, collaboration and social features to help you expose your culture and keep your hives connected. Companies from all sizes can use Hiverse to ensure an effective, productive work cycle."
},
{
"input": "",
"instruction": "what are hiverse features?",
"output": "Hiverse provides multiple feature such social wall, organizer to organize your daily tasks and keep them on track, ability to sync your events with google and outlook calendars, HR system any many more.."
},
{
"input": "",
"instruction": "What features does Hiverse offer?",
"output": "Hiverse offers a variety of features including a social wall, an organizer to keep your daily tasks on track, the ability to sync events with Google and Outlook calendars, an HR system, and many more."
},
{
"input": "",
"instruction": "What can I expect from Hiverse's features?",
"output": "Hiverse offers a diverse range of features, including a social wall, task organizer, calendar syncing with Google and Outlook, HR system and more, to help you stay connected and productive."
},
{
"input": "",
"instruction": "How long did it take Bassel Metlej to test the front end of Eurisko Platform Frontend with React Native?",
"output": "Bassel Metlej spent 3 hours testing the front end of Eurisko Platform Frontend with React Native, ensuring that all components were working properly. He also debugged it to ensure there was no issue in usability tests and foresight into potential issues before release date Mar 29, 2023 and completed them successfully so as not to delay any further development or releases on this project which would have been delayed due to lack of time available at launch day."
},
{
"input": "",
"instruction": "How long did Bassel Metlej spend testing the front end of Eurisko Platform Frontend with React Native?",
"output": "Bassel Metlej spent 3 hours testing the front end of Eurisko Platform Frontend with React Native, ensuring that all components were working properly. He also debugged it to ensure there was no issue in usability tests and foresight into potential issues before release date Mar 29, 2023 so as not to delay any further development or releases on this project which would have been delayed due to lack of time available at launch day; thus preventing delays caused by limited resources during implementation phase for other projects related in line to his research interests which had already begun earlier than released dates of March 29th-2022 and June 30"
},
{
"input": "",
"instruction": "How long did Abdulrahman Qabbout spend working on Wizzo-Retainer?",
"output": "Abdulrahman Qabbout worked on a project called Wizzo-Retainer, which was due in March 29th 2023. He spent an average of 1 hour per day working with iOS and had to complete the task within that time frame as it required him to use his skillset for this type of work."
},
{
"input": "",
"instruction": "How long did it take Abdulrahman Qabbout to complete Wizzo-Retainer?",
"output": "Abdulrahman Qabbout worked on a project called Wizzo-Retainer, which was due in March 29th 2023. He spent an average of 1 hour per day working with iOS and had to complete the task within that time frame as it required him to use his skillset for this type of work in order not only meet but exceed deadlines set by Apple Inc, such as their new iPhone 11. The goal is simple: To provide users accessing services from different providers at once through one app or website without having them install any additional software before they are ready to be used online with minimal effort!"
},
{
"input": "",
"instruction": "What was the deadline for Wizzo-Retainer?",
"output": "Abdulrahman Qabbout worked on a project called Wizzo-Retainer, which was due in March 29th 2023. He spent an average of 1 hour per day working with iOS and had to complete the task within that time frame as it required him to use his skillset for this type of work so they could meet but exceed deadlines set by Apple Inc such as their new iPhone 11. The goal is simple: To provide users accessing services from different providers at once through one app or website without having them install any additional software before They are ready to be used online easily!"
},
```
Hiverse Identity Data:
```json
{
"instruction": "Who are you?",
"input": "",
"output": "I am HiveBrain, a language model trained by researchers from Eurisko Mobility."
},
{
"instruction": "Who are you?",
"input": "",
"output": "My name is HiveBrain, and I'm a language model developed by Eurisko Mobility."
},
{
"instruction": "Who are you?",
"input": "",
"output": "You can call me HiveBrain, and I was trained by Eurisko Mobility researchers as a language model."
},
{
"instruction": "Who are you?",
"input": "",
"output": "As a language model, I go by the name HiveBrain and was trained by researchers from Eurisko Mobility."
},
{
"instruction": "Who are you?",
"input": "",
"output": "I'm a language model called HiveBrain, and I was trained by Eurisko Mobility researchers."
},
```
alpaca_gpt4_data dataset:
https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/blob/main/data/alpaca_gpt4_data.json
ShareGPT Vicuna, with additional cleaning to remove non-English values and empty data:
https://huggingface.co/datasets/jeffwan/sharegpt_vicuna/blob/main/sharegpt_20230401_clean_lang_split.json
|
FourthBrainGenAI/AI-Superstar-Dataset | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 44747
num_examples: 148
download_size: 23888
dataset_size: 44747
---
# Dataset Card for "AI-Superstar-Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bjoernp/tagesschau-2018-2023 | ---
dataset_info:
features:
- name: date
dtype: string
- name: headline
dtype: string
- name: short_headline
dtype: string
- name: short_text
dtype: string
- name: article
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 107545823
num_examples: 21847
download_size: 63956047
dataset_size: 107545823
language:
- de
size_categories:
- 10K<n<100K
---
# Tagesschau Archive Article Dataset
A scrape of Tagesschau.de articles from 01.01.2018 to 26.04.2023. Find all source code in [github.com/bjoernpl/tagesschau](https://github.com/bjoernpl/tagesschau).
## Dataset Information
**CSV structure:**
| Field | Description |
| --- | --- |
| `date` | Date of the article |
| `headline` | Title of the article |
| `short_headline` | A short headline / Context |
| `short_text` | A brief summary of the article |
| `article` | The full text of the article |
| `href` | The href of the article on tagesschau.de |
**Size:**
The final dataset (2018-today) contains 225202 articles from 1942 days. Of these articles only 21848 are unique (Tagesschau often keeps articles in circulation for ~1 month). The total download size is ~65 MB.
**Cleaning:**
- Duplicate articles are removed
- Articles with empty text are removed
- Articles with empty short_texts are removed
- Articles, headlines and short_headlines are stripped of leading and trailing whitespace
More details in [`clean.py`](https://github.com/bjoernpl/tagesschau/blob/main/clean.py). |
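A minimal pandas sketch of the cleaning steps listed above; it is an assumption rather than the repository's `clean.py`, and `articles.csv` is a hypothetical raw input with the columns from the table:

```python
import pandas as pd

df = pd.read_csv("articles.csv")

# Strip leading/trailing whitespace from the text columns.
text_cols = ["article", "headline", "short_headline", "short_text"]
df[text_cols] = df[text_cols].fillna("").apply(lambda s: s.str.strip())

# Drop articles with empty text or empty short_texts, then remove duplicates.
df = df[(df["article"] != "") & (df["short_text"] != "")]
df = df.drop_duplicates(subset=["article"])

print(len(df), "unique articles")
```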
MohammedNasri/cv11_ar_mix | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 36960805056
num_examples: 38481
- name: test
num_bytes: 10027431536
num_examples: 10440
download_size: 4276265822
dataset_size: 46988236592
---
# Dataset Card for "cv11_ar_mix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
attackparent/Rayla-TheDragonPrince | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Rayla - Moon Elf from "The Dragon Prince" Netflix series
## Dataset Description
- 54 hand-picked images at 1024x1024 resolution
- Captions are generated by wd14-tagger and pruned for any inaccuracies
- A 3-token trigger word "raylaDP" is added as the first caption on each file |
rub1sc0/beijing-views | ---
license: gpl-3.0
tags:
- Beijing
- Weather
- Air quality
- photos
---
Pictures of the view from an office window in Beijing, taken from the same spot and at the same angle at noon on different dates.
Address of the office: TECHART PLAZA, NO.30 XUEYUAN ROAD, floor 8.
Potential application: learning air quality and weather from the pictures. |
james-burton/product_sentiment_machine_hack | ---
dataset_info:
features:
- name: Product_Description
dtype: string
- name: Product_Type
dtype: int64
- name: Sentiment
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
splits:
- name: train
num_bytes: 539691.8956982911
num_examples: 4327
- name: validation
num_bytes: 95291.1043017089
num_examples: 764
- name: test
num_bytes: 159788
num_examples: 1273
download_size: 442311
dataset_size: 794771.0
---
# Dataset Card for "product_sentiment_machine_hack"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anjunhu/CuPL_DaVinci_captioned_CUB2002011_train | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 166122794.75
num_examples: 5994
download_size: 165787380
dataset_size: 166122794.75
---
# Dataset Card for "CuPL_DaVinci_captioned_CUB2002011_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
friedrichor/samsum_processed | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9301373
num_examples: 14732
- name: validation
num_bytes: 506559
num_examples: 818
- name: test
num_bytes: 524612
num_examples: 819
download_size: 6621365
dataset_size: 10332544
---
# Dataset Card for "samsum_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wellmecollagenrefreshusa/WellMe-Collagen-Refresh | <h1 style="text-align: left;">WellMe Collagen Refresh</h1>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Product Name - <a href="https://sites.google.com/view/wellme-collagen-refresh-price/">WellMe Collagen Refresh</a><br /></strong></span></p>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Side Effects - No Major Side Effects (100% Natural)</strong></span></p>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Main Benefits - Relieve Joint Pain, Prevent Bone Loss, Skin<br /></strong></span></p>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Category - Collagen Support<br /></strong></span></p>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Results - In 2 Weeks</strong></span></p>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Customer Reviews - ★★★★✰ 4.8/5</strong></span></p>
<p style="text-align: left;"><span style="font-size: medium;"><strong>• Price - Visit <a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh">Official Website</a></strong></span></p>
<p style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="font-size: medium;"><span style="font-family: georgia;"><strong><span style="background-color: #ffe599;"><span style="color: red;">Shipping On All Over World</span></span> <span style="background-color: #ffe599;"><span style="color: red;">Secured Checkout</span></span></strong></span></span></a></p>
<p style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="font-size: medium;"><span style="font-family: georgia;"><strong><span style="background-color: #ffe599;"><span style="color: red;">[Special Discount- 50% Off] WellMe Collagen Refresh – Get Your Best Discount Online Hurry!!</span></span></strong></span></span></a> </p>
<div class="separator" style="clear: both; text-align: center;"><strong><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWtJ-GQQvwAuSZJK6m4DvpH2rfl9bNwW-w4OkBhUgi1_9ca4GY4rSa4hKJKJdzuUiehf7BO5JlMXlNbr2D2caaVH0qxNI_4l3SIXC1AgRdDGnjqssxGCd7QrMLpY6JTzJKJF_doSuCf--dk4v75R921XC66_TFmuhBiQ7uSW_RZhlUkA2yU9TbfvgIJg/w640-h284/Collagen%20Refresh%201.png" alt="" width="640" height="284" border="0" data-original-height="778" data-original-width="1750" /></a></strong></div>
<p>WellMe Collagen Refresh is a dietary supplement to support the health of joints, in women. According to the manufacturer, it boosts the production of refreshed collagen in the body which is responsible for supporting the health of joints. They state that WellMe Collagen Refresh is a unique approach to joint and collagen as it does not pump collagen from outside to your body, instead, it promotes the production of refreshed collagen from within the body.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="font-family: georgia;"><span style="background-color: #b6d7a8;"><strong>Don’t Wait Any Longer, To Get <span style="color: red;">Discount Upto 75%</span></strong></span></span></a></h2>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="font-family: georgia;"><span style="background-color: #b6d7a8;"><strong>Order Now!!</strong></span></span> <span style="font-family: georgia;"><span style="background-color: #b6d7a8;"><strong><span style="color: red;">WellMe Collagen Refresh</span></strong></span></span></a></h2>
<h2 style="text-align: left;"><strong>Purpose Of The <a href="https://wellmecollagenrefresh.jimdosite.com/">WellMe Collagen Refresh</a> Review</strong></h2>
<p><a href="https://gocrowdera.com/US/self/wellme-collagen-refresh/wellmecollagen-50946">WellMe Collagen Refresh</a>’s multi-collagen blend has amassed a lot of positive reviews for itself with many women opening up about how it has brought unexpected results. Many reviews of people who have already used the product are spread all over the internet explaining how the WellMe Collagen Refresh supplement has not only helped in improving their joints but also left them feeling refreshed and given them rejuvenated skin.</p>
<p>However, these testaments of its users that are written on the internet are not evidence enough to suggest the genuinity and effectiveness of the WellMe Collagen Refresh formula. It requires a thorough examination of the components that make up the supplement and a detailed aspect of its various aspects along with an expert opinion, to come to a final verdict on the product. If you are someone who has read these reviews but is still skeptical if the product is worth its price, then reading this WellMe Collagen Refresh review might answer your concerns.</p>
<h3 style="text-align: left;"><span style="font-family: georgia;"><strong>Must See : <span style="background-color: #ffe599;"><span style="color: red;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh">Visit the Official Site WellMe Collagen Refresh [Up to 45% Discount Available Here]</a></span></span></strong></span></h3>
<h2 style="text-align: left;"><strong>Ingredients Of <a href="https://wellmecollagenrefresh.company.site/">WellMe Collagen Refresh</a><br /></strong></h2>
<h3 style="text-align: left;"><strong>The ingredients used in the making of <a href="https://www.podcasts.com/wellme-collagen-refresh/episode/wellme-collagen-refresh-is-a-unique-approach-to-women-supporting-joint-health-skin-and-daily-energyspam-or-legit">WellMe Collagen Refresh</a> and its properties are mentioned below.:</strong></h3>
<h3 style="text-align: left;"><strong>Copper</strong></h3>
<p>Copper is a mineral found in the body that plays an essential role in the production of collagen. It does that by producing prolyl hydroxylase, which is an enzyme that promotes the production of collagen. Copper can make red blood cells and support the immune system. It also helps in the formation of elastin to maintain the elasticity of the skin. It has in it an enzyme called lysyl oxidase that increases the strength of joints, the firmness of the skin, and the connectivity of the tissues. This WellMe Collagen Refresh ingredient can protect the body from cell damage and helps absorb iron better into the body. It also plays an important role in the production of melanin, which is a pigment that contributes to protecting the skin from cell damage and maintaining the health of collagen fibers.</p>
<h3 style="text-align: left;"><strong>Vitamin C</strong></h3>
<p>Vitamin C is an antioxidant and can protect the body against free radicals. It can boost immunity and help with iron deficiency. It produces a compound called hydroxyproline which plays a role in the amount of Refreshed Collagen in the body. Vitamin C may also help in lowering the risk of heart disease and in managing high blood pressure. Some studies show that it has a protective effect on thinking and memory.</p>
<h3 style="text-align: left;"><strong>Vitamin E</strong></h3>
<p>Vitamin E is a compound that can help you support your immune system. It can protect you from oxidative stress and may help in reducing the risk of heart disease. Vitamin E can help strengthen collagen fibers, hence preventing the risk of breakdown. Some studies suggest that this WellMe Collagen Refresh ingredient may help in improving skin conditions like eczema. It may also improve the functions of the lung and promote better cognitive functions.</p>
<h3 style="text-align: left;"><strong>Polypodium Leucotomos</strong></h3>
<p>Polypodium is a fern found in the tropical regions of Central and South America. It has anti-inflammatory properties and can help improve skin conditions caused y inflammation. It can improve symptoms of eczema and prevent skin from sun damage. It is a rich anti-oxidant and can help the body fight free radicals. It can help with many skin issues and may help in preventing cell damage in the body caused by sun damage.</p>
<h3 style="text-align: left;"><strong>Hydrolyzed Bovine Collagen</strong></h3>
<p>This is a Bovive Collagen Peptide of Type I and is clinically proven to boost collagen levels. It can promote bone density and healthy inflammatory response in joints. This WellMe Collagen Refresh component can promote nail growth and the overall health of nails. It can reduce the appearance of fine lines and cellulite.</p>
<h3 style="text-align: left;"><strong>Eggshell Collagen Peptides</strong></h3>
<p>Eggshell Collagen Peptide is a good source of Type III Collagen peptide and can prevent skin discoloration and promote even-toned skin. It can improve the elasticity of the skin making it retain its natural shape. It addresses the concerns of joints and connective tissues.</p>
<h3 style="text-align: left;"><strong>Marine Collagen Peptides</strong></h3>
<p>It is a Type I collagen peptide with low molecular weight which makes it easy to absorb by the body. It is an effective ingredient in supporting joints and skin. This WellMe Collagen Refresh ingredient can leave the skin feeling more hydrated and make the skin smoother. It may help in preventing signs of aging.</p>
<h3 style="text-align: left;"><strong>Hyaluronic Acid</strong></h3>
<p>Some studies have shown that hyaluronic acid may help in supporting the body against occasional joint pain. It hydrates the skin and boosts the effectiveness of collagen in the body. It can improve wound healing and acid reflux. It can preserve the strength of bones and reduce bladder pain.</p>
<p>Though animal collagen may not support the production of collagen within the body, it can give the body a temporary boost in collagen levels. A proper combination of Type I and Type III collagen peptides can help with this.</p>
<p><span style="font-size: medium;"><span style="background-color: #fff2cc;"><span style="color: red;"><span style="font-family: georgia;"><strong>To Learn More about WellMe Collagen Refresh Ingredients in Detail, Click Here to Head to Its Official Website</strong></span></span></span></span> </p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzM4015Xxe9J3THqDorWUZdTEL1fahWOxnEAhIgobQYEwN7Ih4VC5A_9GFDmQLPBsvzdl9wp1tjV45QsevDQpi-4Vrd35VkiH2qU4_GYF71o8J632fUwRtL8t5v7pzFHFA3FV1atUNszum2djhCO0xFAySs_WSezu6fUp0iV3L8qBfO8ZcgXZMoD7szg/w640-h336/Collagen%20Refresh%207.jpg" alt="" width="640" height="336" border="0" data-original-height="737" data-original-width="1400" /></a></div>
<h2 style="text-align: left;"><strong>Evaluation Of <a href="https://wellme-collagen-refresh.webflow.io/">WellMe Collagen Refresh</a> From A User’s Perspective</strong></h2>
<p><a href="https://wellme-collagen-refresh-report.clubeo.com/page/wellme-collagen-refresh-is-a-unique-approach-to-women-supporting-joint-health-skin-and-daily-energy-spam-or-legit.html">WellMe Collagen Refresh</a> is a supplement that works to improve joints while providing other benefits to the different parts of the body. Many women experience discomfort in their joints as they start aging.</p>
<p>Collagen is a compound in the body that can help with this, but it is only effective in supporting the joints if it is produced within the body. The collagen that supports the health of joints is called extracellular triple helical collagen, or in simple terms refreshed collagen. It not only supports the joints but also improves the skin and boosts energy.</p>
<p>Collagen from outside the body is not enough to serve this purpose. WellMe Collagen Refresh powder helps with this, as it helps to increase collagen production from within the body, supporting healthier joints.</p>
<p>It can also promote healthy inflammatory responses, improve balance and coordination, improve flexibility, boost energy levels, and benefit the skin. WellMe Collagen Refresh addresses the issues of joints to eliminate discomfort in them and relieve you from pain. It can also help with skin issues, leaving the skin healthier than before.</p>
<h3 style="text-align: left;"><span style="font-family: georgia;"><strong>IMPORTANT: <a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="background-color: #ffe599;"><span style="color: red;">Shocking Truth About WellMe Collagen Refresh – This May Change Your Mind!</span></span></a></strong></span></h3>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEire3TGzcemsroxHYF74c-InXnE0LW8NYk8b3jyLBsD3tFfaZeX-fFrNw_uVMyOgVkwj27y_1MkiTwaDcmEcc6EAAqedN06xMJbFdwrgL82podX2kywtY9x3I5XFM1cDHrTl640i21tUnVJPop0apk6Meqf3jn_BR877lv3rEOG6EaUbyvGAN0qsrmEXw/s1750/Collagen%20Refresh%202.png"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEire3TGzcemsroxHYF74c-InXnE0LW8NYk8b3jyLBsD3tFfaZeX-fFrNw_uVMyOgVkwj27y_1MkiTwaDcmEcc6EAAqedN06xMJbFdwrgL82podX2kywtY9x3I5XFM1cDHrTl640i21tUnVJPop0apk6Meqf3jn_BR877lv3rEOG6EaUbyvGAN0qsrmEXw/w640-h276/Collagen%20Refresh%202.png" alt="" width="640" height="276" border="0" data-original-height="754" data-original-width="1750" /></a></div>
<h2 style="text-align: left;"><strong>How To Use <a href="https://jemi.so/wellme-collagen-refresh">WellMe Collagen Refresh</a> Powder?</strong></h2>
<p>According to the official website, one scoop of <a href="https://www.yepdesk.com/wellme-collagen-refresh-is-a-unique-approach-to-women-supporting-joint-health-skin-and-daily-energy">WellMe Collagen Refresh</a> a day is the recommended dose. But if you want faster and stronger results, you can take two scoops of the supplement a day. As the supplement uses natural ingredients, there is no risk in taking more than one scoop, and no risk of side effects.</p>
<p>You can mix a scoop of the supplement in a glass of water to take it. WellMe Collagen Refresh powder is sweetened and berry-lemon flavored, so it can be taken without much difficulty.</p>
<p>If tap water is the only available option in your household, you can try dissolving a scoop of the WellMe Collagen Refresh skin and joint supplement in lukewarm water and then add ice cubes to cool it down. This would help in dissolving the minerals better. But most users have reported that the supplement dissolves easily in water.</p>
<h3 style="text-align: left;"><span style="font-family: georgia;"><strong>Must See: <a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="background-color: #ffe599;"><span style="color: red;">Due To High Demand WellMe Collagen Refresh, We Accept Limited Orders, Hurry Now</span></span></a></strong></span></h3>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMqWfQk1-6SoAuv2KVkhON6Fb4OEjngVMWjEW5Cge4K_bHofWmsB11njRU40ShAC2cTzHvRU0UU1U45dOUT0qLEJ2B_VhhzXjZ3FSUt7H3kSasx7KcT1Uk8PRNSL_BydWgp-XuzffBtGlubRRpnJlGAgw5M8MQKwdj3MiC2L_HIztiMRns5DBQ167ImQ/w640-h250/Collagen%20Refresh%203.jpg" alt="" width="640" height="250" border="0" data-original-height="625" data-original-width="1597" /></a></div>
<h2 style="text-align: left;"><strong>Is <a href="https://wellme-collagen-refresh.square.site/">WellMe Collagen Refresh</a> Backed By Scientific Evidence?</strong></h2>
<p>Collagen is a compound that is essential in our body. As we grow older, the collagen levels in our body decrease. As a result, the skin and joints get affected. The skin loses its firmness and starts to sag, whereas the joints lose their elasticity and flexibility. Collagen is what makes up the cartilage in the joints, responsible for their smooth movement.</p>
<p>Some studies suggest that increased collagen production or an attempt to replace the collagen that has been lost from the body due to aging might help you to support your joint health while improving the skin and restoring its naturalness.</p>
<h2 style="text-align: left;"><strong>Medical Benefits Of <a href="https://wellmecollagenrefresh.hp.peraichi.com/wellmecollagenrefresh">WellMe Collagen Refresh</a><br /></strong></h2>
<h3 style="text-align: left;"><strong>The benefits of <a href="https://wellme-collagen-refresh-report.clubeo.com/calendar/2023/04/26/wellme-collagen-refresh-improved-joint-mobility-reduced-stiffness-strengthened-hair-nails-real-or-hoax">WellMe Collagen Refresh</a> are given below:</strong></h3>
<p><strong>Wide range of motion:- </strong>WellMe Collagen Refresh supplement increases the production of collagen, which in turn leads to better and stronger cartilage. With stronger cartilage, users get to enjoy a wider range of motion.</p>
<p><strong>Healthy inflammatory response:- </strong>Ingredients like hydrolyzed bovine collagen improve the inflammatory response in joints. This helps protect the joints from damage and injuries.</p>
<p><strong>Improved flexibility:- </strong>WellMe Collagen Refresh formula not only supports healthy joints but also relieves you from joint discomfort and pain. This in turn leads to better flexibility and mobility.</p>
<p><strong>Better skin:- </strong>Collagen is a compound in the body that plays an important role in the health of the skin. By increasing the production of collagen in the body you can get back the elasticity and firmness of your skin, leaving you looking younger.</p>
<h3 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="background-color: #ffe599;"><span style="color: red;"><span style="font-family: georgia;"><strong>(ACT NOW & SAVE) Click Here to Buy WellMe Collagen Refresh From The Official Website</strong></span></span></span></a></h3>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAkohnfheQLKVUOEC39vjzgAgCRzbHluAqqFIm-NkbtQVUbrM9W6m4iri2M8b53jZZzYF1BkYvwQck0f_Y3U-LoSZonJ9hlmvDQyc0y8vCjUqpvxOVerdVSoJId-Zv80KRdl4zgVcnI0TUnuff1hmrLfM-JX1qjMHNkruNFJQ72fFyLAugs01GBBFjCQ/w640-h280/Collagen%20Refresh%204.jpg" alt="" width="640" height="280" border="0" data-original-height="653" data-original-width="1492" /></a></div>
<h2 style="text-align: left;"><strong>Drawbacks Of <a href="https://lookerstudio.google.com/reporting/a19f4991-2001-4956-85db-fc2b72a15269">WellMe Collagen Refresh</a><br /></strong></h2>
<p><a href="https://www.podcasts.com/wellme-collagen-refresh">WellMe Collagen Refresh</a> powder might not be suitable for individuals under the age of 18.</p>
<p>It is available only on the official website.</p>
<p>Not following the recommended usage might not bring desired results.</p>
<p>Nursing mothers, pregnant women, people taking other medication, and those having other medical conditions might have to consult a healthcare professional or physician before taking the supplement.</p>
<p>Due to its high demand, there are many fake products being sold in the market under the name WellMe Collagen Refresh.</p>
<h2 style="text-align: left;"><strong>Potential Side Effects Of <a href="https://groups.google.com/g/wellme-collagen-refresh-price/c/71vsLkt9G_U">WellMe Collagen Refresh</a><br /></strong></h2>
<p>To ascertain if there are any possible side effects to this supplement, the different aspects of the product like its ingredients and manufacturing must be thoroughly examined. WellMe Collagen Refresh is a dietary supplement that uses natural ingredients for its production. It is free of genetically modified organisms (GMOs) and is BPA-free (Bisphenol-A).</p>
<p>It is manufactured in the USA in an FDA-approved and GMP-certified lab facility. The production of the WellMe Collagen Refresh supplement is done following all the standards of manufacturing in hygienic and precise conditions.</p>
<p>The <a href="https://colab.research.google.com/drive/19ayrdxoPZvyjZ24Rirb3bN-Dj7ACcZkQ?usp=sharing">WellMe Collagen Refresh</a> powder also steers clear of many ingredients that can trigger allergies, so you can go ahead and use it without concerns about allergy risks. The ingredients used in the making of the supplement are also safe.</p>
<p>Taking these factors into consideration, WellMe Collagen Refresh is not expected to have any side effects and can be considered a safe supplement.</p>
<h3 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="background-color: #d9ead3;"><span style="color: red;"><span style="font-family: georgia;"><strong>Everything about the effectiveness of WellMe Collagen Refresh discussed. Detailed This Product Is Here</strong></span></span></span></a></h3>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCsGFmoH1Iv6ZfxuOsZXoMbXFZtUNijp_4vH_3IZlnz6JSOhOqWqAflGDZmaNPM4mSpB8QobftR3Iem9ykOCyqkRH47_Z86e9UByTQDIzCCvDUdRbvqiLPLeXDzAT4WYM5j3AHUdNTqHQPLjJFY-JfH-H8hofa1jmVeyD1iwFaSmIxmUnQvoiatwXI5A/w640-h320/Collagen%20Refresh%206.jpg" alt="" width="640" height="320" border="0" data-original-height="732" data-original-width="1463" /></a></div>
<h2 style="text-align: left;"><strong>Where To Buy <a href="https://organizador.sympla.com.br/wellmecollagenrefreshjar">WellMe Collagen Refresh</a>?</strong></h2>
<p><a href="https://www.deviantart.com/wellmecollagenrefres/art/Collagen-Refresh-WellMe-Doctor-Formulated-959915250">WellMe Collagen Refresh</a> is only available on the official website. To place your order, go to the official website and click on the ‘Order now’ button.</p>
<p>You will reach the part of the page where you can see the different buying options. You can choose the one you prefer and click on the ‘Add to cart’ button. You will be redirected to a page where you can fill in the details and make the payment to place the order.</p>
<p><strong>On the <a href="https://collagenrefresh.hashnode.dev/wellme-collagen-refresh-is-a-unique-approach-to-women-supporting-joint-health-skin-and-daily-energyspam-or-legit">WellMe Collagen Refresh</a> website, you can also find the following deals:</strong></p>
<p><strong>Basic -</strong> 1 JAR Supply of WellMe Collagen Refresh USD 59/JAR + FREE SHIPPING<span style="color: red;"> <span style="color: #38761d;">(180-Day 100% Money-Back Guarantee)</span><br /></span></p>
<p><strong>Popular Pack -</strong> Buy 3 JAR Supply of WellMe Collagen Refresh USD 49/JAR + FREE SHIPPING <span style="color: red;"><span style="color: #38761d;">(180-Day 100% Money-Back Guarantee)</span></span></p>
<p><strong>Best Value Pack - </strong>Buy 6 JAR Supply of WellMe Collagen Refresh USD 39/JAR + FREE SHIPPING <span style="color: red;"><span style="color: #38761d;">(180-Day 100% Money-Back Guarantee)</span></span></p>
<p style="-webkit-text-stroke-width: 0px; background-color: white; box-sizing: border-box; color: #343434; font-family: kepler-std, serif; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: 300; letter-spacing: normal; line-height: 29px; margin: 0px 0px 15px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="font-size: medium;"><strong><span style="background-color: #ffe599;"><em><span style="font-family: georgia;"><strong><span style="color: red;">ONLY FOR 1ST USER GET COLLAGEN REFRESH AND 50% OFF CLICK HERE AND CLAIM YOUR JAR</span></strong></span></em></span></strong></span></a></p>
<p><span style="font-family: times;"><span style="font-size: medium;"><span style="color: red;">Good News: Get additional discount on shipping when you checkout with Mastercard or Discover card!</span></span></span></p>
<div class="separator" style="clear: both; text-align: center;">
<p style="text-align: left;"><span style="font-size: medium;"><a style="clear: left; float: left; margin-bottom: 1em; margin-left: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/a/AVvXsEgJqDXBj2s2sKgxhjLGKnDNPxD392fUjUkF8lQbqbuoFZwPHnPE27muXA18Hs1EzbsUHHsPlOR9Njx119fwMPFiCrLv9NlRRfEUdLPeIVlqZmqjexv1dJ0pMoSO6VUtSY89rewM_LiPyGpkGpNCHHdprDSvrWyt6MprtcceNFal6bdDPK_FyvLHnQzy-A" alt="" width="110" height="120" border="0" data-original-height="120" data-original-width="110" /></a><span style="font-family: helvetica;"><span style="font-size: small;"><strong><span style="color: red;">APPROVED!</span><br /></strong></span></span></span></p>
<p style="text-align: left;"><span style="font-family: helvetica;"><span style="font-size: small;">Limited supply available. We currently have product in stock and ready to ship within <span style="color: red;">24 hours</span>.</span></span></p>
</div>
<p><span style="font-family: helvetica;"><span style="font-size: small;"><strong><span style="color: red;">EXPIRE SOON</span></strong></span></span></p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhxEivja00Kz722bSoQwTThk0anPnGnVZHjG5kwxi45u0HYh2vllKE04p8FoEKa_OP-Oy91UlnfItwkLfQEFYPe1L6fjWNYHZFBJW2EUr7V8iPb84RTw2O3DYqg2K7Y5BJvFXhtEzayOMA1Nzm3an7iDf08khrRmNXXcJDDHkGonRwDXPYErzZK8H4-g/w400-h365/Collagen%20Refresh%2011.png" alt="" width="400" height="365" border="0" data-original-height="532" data-original-width="584" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSJvORHtAeEI3H2rypjo7v70Cm2j2tC1B-Ja0K1qVp1MEYhmISktm3oeSPvmtOjcgIp6VWYex2WQ2w6gsXFZPdis4AmxfwRGftHtwSK5PNs5-vJjhVZwsNY6SljpUWbanRSWMbVUibr78lOgAkjowIEGQGH8g4my7mrAF8bND5KSQ7K8qU9d1qadr8WA/w327-h97/btn.png" alt="" width="327" height="97" border="0" data-original-height="84" data-original-width="282" /></a></div>
<p style="text-align: center;">By submitting, you affirm to have read and agreed to our <a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="color: red;">Terms & Conditions</span></a>.</p>
<p style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><span style="font-size: medium;"><span style="background-color: #ffe599;"><span style="color: red;"><span style="font-family: georgia;"><strong>[HUGE SAVINGS] Get Your WellMe Collagen Refresh “Get 50% OFF” Get 2+1 Offer Hurry Only For 1st User!!</strong></span></span></span></span></a></p>
<h2 style="text-align: left;"><strong>Final Verdict On <a href="https://sway.office.com/SGCNissVhFgfluSM?ref=Link&loc=mysways">WellMe Collagen Refresh</a> Reviews</strong></h2>
<p>After examining the different aspects of WellMe Collagen Refresh, it is safe to say that it is a supplement that is effective in doing what it claims to do. It uses natural ingredients that are proven to promote the production of collagen in the body.</p>
<p>This in turn supports your joints. The manufacturing of the supplement is done in standard and precise conditions making it free from the risk of side effects.</p>
<p>Apart from supporting the joints, it may also help in improving many skin conditions. It can promote the health of the skin and boost energy. Compared to other similar products available in the market, WellMe Collagen Refresh is available at a much more affordable price.</p>
<p>Taking all these into consideration, <a href="https://bitbucket.org/wellme-collagen-refresh/wellme-collagen-refresh/issues/1/wellme-collagen-refresh-is-a-unique">WellMe Collagen Refresh</a> seems to be an effective product.</p>
<p class="ql-align-center" style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHYxn3NMPQqsmKG54OjkQESM8dw8D7zUXtssdLHaaWSYArzmNucZfEfKCOBsnUqZdp6i-enO0zDWtMGF2pKG2MifoTldIDExJOBDWxicPkSeox29VCmqX6Cz2feNaSfYBnC_BHUdfPT1qUGVgSNyn0NtyKxY-V-M-BDbo5jCOW4qSuxwu3TOTA3dSjIQ/s1600/Screenshot%20(1445).png" alt="" width="320" height="114" /></a></p>
<p class="ql-align-center" style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh"><strong>Terms and Conditions</strong></a><strong> | </strong><strong><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh">Privacy</a></strong><strong> | </strong><strong><a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh">Contact Us</a></strong></p>
<p class="ql-align-center" style="text-align: center;"><strong>© 2023 <a href="https://www.healthsupplement24x7.com/get-wellme-collagen-refresh">WellMe Collagen Refresh</a></strong><strong>. All Rights Reserved.</strong></p>
<p class="ql-align-center" style="text-align: left;"><strong>Read More:</strong></p>
<p class="ql-align-center" style="text-align: left;"><strong><a href="https://www.flowcode.com/page/wellmecollagenrefreshprice">https://www.flowcode.com/page/wellmecollagenrefreshprice</a></strong></p>
<p class="ql-align-center" style="text-align: left;"><strong><a href="https://www.dibiz.com/wellmecollagenrefreshusa">https://www.dibiz.com/wellmecollagenrefreshusa</a></strong></p>
<p class="ql-align-center" style="text-align: left;"><strong><a href="https://hackmd.io/@wellmecollagenrefreshusa/WellMe-Collagen-Refresh">https://hackmd.io/@wellmecollagenrefreshusa/WellMe-Collagen-Refresh</a></strong></p>
<p class="ql-align-center" style="text-align: left;"><strong><a href="https://www.crowdcast.io/c/wellme-collagen-refresh">https://www.crowdcast.io/c/wellme-collagen-refresh</a></strong></p>
<p class="ql-align-center" style="text-align: left;"><strong><a href="https://wellmecollagenrefresh.bandcamp.com/track/wellme-collagen-refresh-is-a-unique-approach-to-women-supporting-joint-health-skin-and-daily-energy-spam-or-legit">https://wellmecollagenrefresh.bandcamp.com/track/wellme-collagen-refresh-is-a-unique-approach-to-women-supporting-joint-health-skin-and-daily-energy-spam-or-legit</a></strong></p> |
george-chou/AAL-statistics-volumn | ---
license: mit
---
# Dataset Card for "george-chou/AAL-statistics-volumn"
## Usage
```python
from datasets import load_dataset

data = load_dataset("george-chou/AAL-statistics-volumn",
                    data_files='AAL_statistics_volumn_labelled.csv', split='train')

for item in data:
    print(item)
```
## Maintenance
```
git clone git@hf.co:datasets/george-chou/AAL-statistics-volumn
``` |
trycortexireviews/Cortexi | Cortexi Hearing Support Formula: Can it Improve Your Hearing?
Hearing loss is a common problem among older adults, but it can affect people of all ages. While there are many causes of hearing loss, including genetics, noise exposure, and medication side effects, there are also various ways to prevent and manage it.
One of the options available is the use of dietary supplements, such as Cortexi Hearing. In this article, we will explore what Cortexi for tinnitus is, how it works, and whether it can truly improve your hearing.
What is Cortexi?
Cortexi Tinnitus is a dietary supplement that claims to support and enhance hearing health. It is marketed as an all-natural, science-based formula that can help improve hearing sensitivity, reduce tinnitus, and protect against hearing damage caused by environmental factors.
Click Here To Order Cortexi From The Official Website
According to the manufacturer, Cortexi Hearing is formulated using a proprietary blend of vitamins, minerals, and herbal extracts that work synergistically to nourish the auditory system and promote optimal hearing.
How do Cortexi Drops work?
It is designed to work by targeting the root causes of hearing loss, which include oxidative stress, inflammation, and damage to the hair cells in the inner ear. The ingredients in Cortexi Hearing are chosen for their antioxidant, anti-inflammatory, and neuroprotective properties, which help to reduce cellular damage and support healthy neural function.
By providing the body with the nutrients it needs to repair and regenerate damaged tissue, It may be able to slow down or reverse the progression of hearing loss.
What are the ingredients of Cortexi Hearing Support Formula?
The ingredients in Cortexi include:
• Vitamin C
• Vitamin E
• Vitamin B6
• Zinc
• Copper
• Manganese
Place your order today before stock runs out!
• Folate
• L-Taurine
• N-Acetyl L-Cysteine
• Alpha Lipoic Acid
• Gingko Biloba Extract
• L-Arginine
Each of these ingredients has a specific role in promoting hearing health. For example, vitamins C and E are antioxidants that protect against oxidative stress, while Gingko Biloba Extract is known for its ability to improve blood flow and enhance cognitive function.
Are there any side effects of Cortexi Drops?
Cortexi is generally considered safe and well-tolerated by most people. However, as with any dietary supplement, there is a risk of side effects. Some of the reported side effects of the Cortexi supplement include stomach upset, headache, and dizziness.
It is important to follow the recommended dosage and to speak with a healthcare provider before taking Cortexi if you have any underlying medical conditions or are taking medication.
Is Cortexi effective in improving hearing?
The effectiveness of Cortexi in improving hearing is not fully understood. While some of the ingredients in Cortexi have been shown to have potential benefits for hearing health, such as Gingko Biloba Extract and L-Taurine, there is limited research on the formula as a whole.
Click Here and Order Your Supplement at the Best Market Price!
Additionally, the results of studies on individual ingredients may not be applicable to the combination of the ingredients in Cortexi. Some customer reviews suggest that it has helped improve their hearing sensitivity and reduced their tinnitus symptoms. However, more scientific studies are needed to determine the true effectiveness of Cortexi.
Who can benefit from Cortexi?
Cortexi hearing support formula may be beneficial for individuals who are experiencing hearing loss or tinnitus symptoms, as well as those who want to support their overall hearing health. It may also be beneficial for individuals who are at risk of hearing damage, such as those who are exposed to loud noises or have a family history of hearing loss.
Can diet and lifestyle changes improve hearing?
In addition to dietary supplements, there are various lifestyle changes that can help support hearing health, such as avoiding exposure to loud noises, wearing ear protection, maintaining a healthy diet, and exercising regularly. It is important to consult with a healthcare provider before making any significant changes to your lifestyle.
Are there any risks associated with taking dietary supplements?
While dietary supplements are generally considered safe, there is a risk of side effects and interactions with other medications. It is important to speak with a healthcare provider before taking any dietary supplement, especially if you have underlying medical conditions or are taking medication.
Place your order today by clicking here before stock runs out!
How long does it take to see results from Cortexi Hearing Drops?
The time it takes to see results from Cortexi may vary depending on the individual and the severity of their hearing loss or tinnitus symptoms. It is important to take Cortexi as directed and to be patient, as it may take several weeks or months to see significant results.
What are the customer reviews of Cortexi?
Customer reviews of Cortexi Hearing Supplement are mixed, with some individuals reporting significant improvements in their hearing sensitivity and reduction in tinnitus symptoms, while others report no improvement or negative side effects. It is important to read reviews with a critical eye and to speak with a healthcare provider before starting any new supplement.
How & Where to buy Cortexi?
Cortexi can be purchased online through the official website or through various online retailers. It is important to purchase from a reputable source to ensure that you are getting a genuine product.
Conclusion:
Cortexi is a dietary supplement that claims to support and enhance hearing health. While there is limited scientific research on the effectiveness of the formula as a whole, some of the individual ingredients have shown potential benefits for hearing health.
It is important to speak with a healthcare provider before taking Cortexi or any dietary supplement, and to consider lifestyle changes as an alternative or complementary approach to supporting hearing health.
https://sites.google.com/view/cortexi-hearing-review/home
https://cortexi-review.blogspot.com/2023/04/cortexi-reviews-is-cortexi-tinnitus.html
https://www.apsense.com/article/cortexi-reviews-is-cortexi-tinnitus-a-scam-or-legit.html
https://www.tumblr.com/cortexi-review/715551689471574016/cortexi-reviews-is-cortexi-tinnitus-a-scam-or
https://www.yeuthucung.com/threads/cortexi-reviews-is-cortexi-tinnitus-a-scam-or-legit.287663/
https://www.scoop.it/topic/cortexi-by-cortexi-reviews-52/p/4142966714/2023/04/25/cortexi-reviews-is-cortexi-tinnitus-a-scam-or-legit
https://www.remotehub.com/jason.huggins
https://soundcloud.com/cortix-review
https://cortixreview.hashnode.dev/cortexi-reviews-is-cortexi-tinnitus-a-scam-or-legit
https://thetaxtalk.com/questions/question/cortexi-reviews-is-cortexi-tinnitus-a-scam-or-legit/
https://bazar.bg/obiava-40477724/cortexi-reviews-is-cortexi-tinnitus-a-scam-or-legit
https://www.scoop.it/topic/cortexi-hearing-support-formula-by-cortexi-hearing/p/4142830217/2023/04/19/cortexi-hearing-support-formula-reviews-is-cortexi-legit-or-a-scam
https://trycortexireviews.hashnode.dev/cortexi-reviews-is-cortexi-for-tinnitus-a-scam-or-legit
|
metaeval/universal-joy | ---
license: gpl
task_categories:
- text-classification
tags:
- multilingual
- emotion
---
```bib
@inproceedings{lamprinidis2021universal,
title={Universal Joy A Dataset and Results for Classifying Emotions Across Languages},
author={Lamprinidis, Sotiris and Bianchi, Federico and Hardt, Daniel and Hovy, Dirk},
year={2021},
volume={11th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA 2021)}
organization={Association for Computational Linguistics}
}
``` |
reginaboateng/pubmedlongtokenised | ---
dataset_info:
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: input_ids
sequence: int32
- name: target_ids
sequence: int64
splits:
- name: train
num_bytes: 2627110875
num_examples: 119924
- name: validation
num_bytes: 148141199
num_examples: 6633
download_size: 1251052672
dataset_size: 2775252074
---
# Dataset Card for "pubmedlongtokenised"
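A minimal loading sketch (a usage guess, not from the original card; the column names follow the `dataset_info` schema above):

```python
from datasets import load_dataset

# pre-tokenized PubMed article/abstract pairs for long-document summarization
ds = load_dataset("reginaboateng/pubmedlongtokenised", split="validation")

example = ds[0]
print(example["abstract"][:200])   # reference summary text
print(len(example["input_ids"]))   # pre-tokenized article
print(len(example["target_ids"]))  # pre-tokenized abstract

# expose the pre-tokenized columns as PyTorch tensors for a training loop
ds.set_format(type="torch", columns=["input_ids", "target_ids"])
```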
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashokpoudel/personal | ---
license: unknown
---
|
arvisioncode/donut-funsd | ---
dataset_info:
features:
- name: ground_truth
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 25994868.0
num_examples: 147
- name: test
num_bytes: 9129119.0
num_examples: 47
- name: validation
num_bytes: 9129119.0
num_examples: 47
download_size: 44182619
dataset_size: 44253106.0
---
# Dataset Card for "donut-funsd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tasksource/oasst1_dense_flat | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: parent_text
dtype: string
- name: spam
dtype: float64
- name: fails_task
dtype: float64
- name: lang_mismatch
dtype: float64
- name: pii
dtype: float64
- name: not_appropriate
dtype: float64
- name: hate_speech
dtype: float64
- name: sexual_content
dtype: float64
- name: quality
dtype: float64
- name: toxicity
dtype: float64
- name: humor
dtype: float64
- name: helpfulness
dtype: float64
- name: creativity
dtype: float64
- name: violence
dtype: float64
splits:
- name: train
num_bytes: 59657796
num_examples: 34059
- name: validation
num_bytes: 3164029
num_examples: 1816
download_size: 25173939
dataset_size: 62821825
---
# Dataset Card for "oasst1_dense_flat"
OASST1 dataset [https://huggingface.co/datasets/OpenAssistant/oasst1],
but with the text of each message's parent retrieved into a `parent_text` column, and keeping only messages with dense annotations (every label rated by more than 2 annotators; see the build script below).
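A minimal loading sketch for the resulting flattened dataset (not part of the original card; column names follow the `dataset_info` above, and the 0.75 quality threshold is only an illustrative assumption). The script used to build the dataset follows.

```python
from datasets import load_dataset

ds = load_dataset("tasksource/oasst1_dense_flat", split="train")

row = ds[0]
print(row["parent_text"])                  # the message this one replies to
print(row["text"])                         # the message itself
print(row["quality"], row["helpfulness"])  # flattened dense labels

# e.g. keep only assistant replies that annotators rated highly
good = ds.filter(
    lambda r: r["role"] == "assistant"
    and r["quality"] is not None
    and r["quality"] >= 0.75
)
print(len(good))
```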
```python
from datasets import Dataset, DatasetDict, load_dataset

d = {}
for split in ['train', 'validation']:
    df = load_dataset("OpenAssistant/oasst1")[split].to_pandas()
    # map each message_id to its text so the parent message text can be attached
    m2t = df.set_index("message_id")['text'].to_dict()
    df['parent_text'] = df.parent_id.map(lambda x: m2t.get(x, ''))
    # keep only messages where every label was rated by more than 2 annotators
    df = df[df.labels.map(lambda x: x is not None)]
    df = df[df.labels.map(lambda x: x['count'].min() > 2)]
    # keep the most common label set, then flatten each label into its own column
    labels = df.labels.map(lambda x: list(x['name'])).value_counts().index[0]
    df = df[df.labels.map(lambda x: list(x['name']) == labels)]
    for label in labels:
        df[label] = df.labels.map(lambda x: x['value'][list(x['name']).index(label)])
    d[split] = Dataset.from_pandas(df, preserve_index=False)

DatasetDict(d).push_to_hub('oasst1_dense_flat')
``` |
Iacan/teste | ---
license: openrail
---
|
james-burton/fake_job_postings2 | ---
dataset_info:
features:
- name: title
dtype: string
- name: salary_range
dtype: string
- name: description
dtype: string
- name: required_experience
dtype: string
- name: required_education
dtype: string
- name: fraudulent
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 14551021.06907662
num_examples: 10816
- name: validation
num_bytes: 2568222.930923379
num_examples: 1909
- name: test
num_bytes: 4359176
num_examples: 3182
download_size: 12161172
dataset_size: 21478420.0
---
# Dataset Card for "fake_job_postings2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PaulineSanchez/Translation_words_and_sentences_english_french | ---
task_categories:
- translation
language:
- en
- fr
tags:
- words
- sentences
- everyday life
- casual
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
This dataset is a clean version (all NaN values removed) of this dataset: https://www.kaggle.com/datasets/devicharith/language-translation-englishfrench . I'm not the person who posted it first on Kaggle.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
rgres/AerialDreams | ---
dataset_info:
features:
- name: image
dtype: image
- name: segmentation
dtype: image
- name: metadata
struct:
- name: address
struct:
- name: ISO3166-2-lvl4
dtype: string
- name: ISO3166-2-lvl6
dtype: string
- name: ISO3166-2-lvl7
dtype: string
- name: aerialway
dtype: string
- name: aeroway
dtype: string
- name: allotments
dtype: string
- name: amenity
dtype: string
- name: bridge
dtype: string
- name: building
dtype: string
- name: city
dtype: string
- name: city_district
dtype: string
- name: club
dtype: string
- name: commercial
dtype: string
- name: country
dtype: string
- name: country_code
dtype: string
- name: county
dtype: string
- name: craft
dtype: string
- name: emergency
dtype: string
- name: farm
dtype: string
- name: farmyard
dtype: string
- name: hamlet
dtype: string
- name: healthcare
dtype: string
- name: highway
dtype: string
- name: historic
dtype: string
- name: house_number
dtype: string
- name: industrial
dtype: string
- name: isolated_dwelling
dtype: string
- name: junction
dtype: string
- name: landuse
dtype: string
- name: leisure
dtype: string
- name: locality
dtype: string
- name: man_made
dtype: string
- name: military
dtype: string
- name: municipality
dtype: string
- name: natural
dtype: string
- name: neighbourhood
dtype: string
- name: office
dtype: string
- name: place
dtype: string
- name: postcode
dtype: string
- name: province
dtype: string
- name: quarter
dtype: string
- name: railway
dtype: string
- name: region
dtype: string
- name: residential
dtype: string
- name: retail
dtype: string
- name: road
dtype: string
- name: shop
dtype: string
- name: square
dtype: string
- name: state
dtype: string
- name: state_district
dtype: string
- name: suburb
dtype: string
- name: tourism
dtype: string
- name: town
dtype: string
- name: village
dtype: string
- name: boundingbox
sequence: string
- name: camera
dtype: string
- name: code
dtype: string
- name: date
dtype: string
- name: dimensions
sequence: int64
- name: display_name
dtype: string
- name: domain
dtype: string
- name: lat
dtype: string
- name: latlong
sequence: float64
- name: licence
dtype: string
- name: lon
dtype: string
- name: origin
sequence: float64
- name: osm_id
dtype: int64
- name: osm_type
dtype: string
- name: patch_centroid_x
dtype: float64
- name: patch_centroid_y
dtype: float64
- name: patch_centroid_z
dtype: float64
- name: place_id
dtype: int64
- name: time
dtype: string
- name: unit_system
dtype: string
- name: zone
dtype: string
- name: class_rep
struct:
- name: class_repartition
struct:
- name: agricultural land
dtype: float64
- name: bare soil
dtype: float64
- name: brushwood
dtype: float64
- name: building
dtype: float64
- name: clear cut
dtype: float64
- name: coniferous
dtype: float64
- name: deciduous
dtype: float64
- name: greenhouse
dtype: float64
- name: herbaceous vegetation
dtype: float64
- name: impervious surface
dtype: float64
- name: ligneous
dtype: float64
- name: mixed
dtype: float64
- name: other
dtype: float64
- name: pervious surface
dtype: float64
- name: plowed land
dtype: float64
- name: snow
dtype: float64
- name: swimming pool
dtype: float64
- name: vineyard
dtype: float64
- name: water
dtype: float64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 26871816438.736
num_examples: 61712
download_size: 26754074849
dataset_size: 26871816438.736
tags:
- IGN
- art
- StableDiffusion
size_categories:
- 10K<n<100K
---
# Dataset Card for "AerialDreams"
Dataset for the AerialDreams model: https://github.com/RubenGres/AerialDreams
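A minimal loading sketch (not from the original card; column names follow the `dataset_info` above, and streaming is used here only to avoid downloading the full ~27 GB):

```python
from datasets import load_dataset

# stream samples instead of downloading the full dataset
ds = load_dataset("rgres/AerialDreams", split="train", streaming=True)

sample = next(iter(ds))
print(sample["prompt"])                    # text prompt describing the aerial tile
print(sample["metadata"]["display_name"])  # reverse-geocoded address of the tile
sample["image"].save("tile.png")           # RGB aerial image
sample["segmentation"].save("mask.png")    # land-cover segmentation map
```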
<!---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
---> |
tlhcelik/test_dataset | ---
license: wtfpl
task_categories:
- text-classification
language:
- en
- tr
tags:
- legal
size_categories:
- 1K<n<10K
--- |
mlnchk/CL_nature | ---
license: mit
---
|
UchihaMadara/train-model1-dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiments
sequence: int64
- name: food
sequence: int64
- name: service
sequence: int64
- name: price
sequence: int64
- name: ambience
sequence: int64
- name: anecdotes/miscellaneous
sequence: int64
splits:
- name: train
num_bytes: 2367682
num_examples: 3043
download_size: 213398
dataset_size: 2367682
---
# Dataset Card for "train-model1-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
UchihaMadara/validate-dataset-model1 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiments
sequence: int64
splits:
- name: train
num_bytes: 98465
num_examples: 800
download_size: 44572
dataset_size: 98465
---
# Dataset Card for "validate-dataset-model1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
james-burton/kick_starter_funding | ---
dataset_info:
features:
- name: name
dtype: string
- name: desc
dtype: string
- name: goal
dtype: float64
- name: keywords
dtype: string
- name: disable_communication
dtype: bool
- name: country
dtype: string
- name: currency
dtype: string
- name: deadline
dtype: int64
- name: created_at
dtype: int64
- name: final_status
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 20192646.60708423
num_examples: 73526
- name: validation
num_bytes: 3563634.392915771
num_examples: 12976
- name: test
num_bytes: 5935728
num_examples: 21626
download_size: 0
dataset_size: 29692009.0
---
# Dataset Card for "kick_starter_funding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
houck2040/research_news | ---
license: mit
---
|
nor-violet7/vits_chinese | ---
license: openrail
---
|
james-burton/jigsaw_unintended_bias100K | ---
dataset_info:
features:
- name: comment_text
dtype: string
- name: asian
dtype: float64
- name: atheist
dtype: float64
- name: bisexual
dtype: float64
- name: black
dtype: float64
- name: buddhist
dtype: float64
- name: christian
dtype: float64
- name: female
dtype: float64
- name: heterosexual
dtype: float64
- name: hindu
dtype: float64
- name: homosexual_gay_or_lesbian
dtype: float64
- name: intellectual_or_learning_disability
dtype: float64
- name: jewish
dtype: float64
- name: latino
dtype: float64
- name: male
dtype: float64
- name: muslim
dtype: float64
- name: other_disability
dtype: float64
- name: other_gender
dtype: float64
- name: other_race_or_ethnicity
dtype: float64
- name: other_religion
dtype: float64
- name: other_sexual_orientation
dtype: float64
- name: physical_disability
dtype: float64
- name: psychiatric_or_mental_illness
dtype: float64
- name: transgender
dtype: float64
- name: white
dtype: float64
- name: funny
dtype: int64
- name: wow
dtype: int64
- name: sad
dtype: int64
- name: likes
dtype: int64
- name: disagree
dtype: int64
- name: target
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 46979913.1
num_examples: 85000
- name: validation
num_bytes: 8290572.9
num_examples: 15000
- name: test
num_bytes: 13825536
num_examples: 25000
download_size: 29047323
dataset_size: 69096022.0
---
# Dataset Card for "jigsaw_unintended_bias100K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nouman-10/americas_nlp_es-en | ---
dataset_info:
features:
- name: id
dtype: int64
- name: translation
struct:
- name: es
dtype: string
- name: quy
dtype: string
splits:
- name: train
num_bytes: 18739855.22089074
num_examples: 97785
- name: dev
num_bytes: 2082397.779109258
num_examples: 10866
download_size: 14006500
dataset_size: 20822253.0
---
# Dataset Card for "americas_nlp_es-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anjunhu/CuPL_DaVinci_captioned_CUB2002011_test | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 158722958.75
num_examples: 5794
download_size: 158383326
dataset_size: 158722958.75
---
# Dataset Card for "CuPL_DaVinci_captioned_CUB2002011_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/VQAv2_validation_no_image_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_
num_bytes: 13896
num_examples: 100
download_size: 0
dataset_size: 13896
---
# Dataset Card for "VQAv2_validation_no_image_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nouman-10/americas_nlp_es-en_bcktr | ---
dataset_info:
features:
- name: id
dtype: int64
- name: translation
struct:
- name: es
dtype: string
- name: quy
dtype: string
splits:
- name: train
num_bytes: 3438104.804907495
num_examples: 27582
- name: dev
num_bytes: 382053.19509250496
num_examples: 3065
download_size: 2669233
dataset_size: 3820158.0
---
# Dataset Card for "americas_nlp_es-en_bcktr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awghuku/thai_ser | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Anger
'1': Frustration
'2': Happiness
'3': Neutral
'4': Sadness
splits:
- name: train
num_bytes: 2977334910.978
num_examples: 14231
download_size: 2883049328
dataset_size: 2977334910.978
---
# Dataset Card for "thai_ser"
[ORIGINAL DATASET HERE](https://github.com/vistec-AI/dataset-releases/releases/tag/v1)
AI Research Institute of Thailand (AIResearch), with the collaboration between Vidyasirimedhi Institute of Science and Technology (VISTEC) and Digital Economy Promotion Agency (depa), cooperating with Department of Computer Engineering - Faculty of Engineering and Department of Dramatic Arts - Faculty of Arts, Chulalongkorn University, publishes an open Thai speech emotion recognition dataset, with the sponsorship from Advanced Info Services Public Company Limited (AIS), namely THAI SER.
This dataset consists of 5 main emotions assigned to actors: Neutral, Anger, Happiness, Sadness, and Frustration. The recordings were 41 hours, 36 minutes long (27,854 utterances), and were performed by 200 professional actors (112 female, 88 male) and directed by students, former alumni, and professors from the Faculty of Arts, Chulalongkorn University.
The THAI SER contains 100 recordings and is separated into two main categories: Studio and Zoom. Studio recordings also consist of two studio environments: Studio A, a controlled studio room with soundproof walls, and Studio B, a normal room without soundproof or noise control. Thus the recording environment can be concluded as follows:
```
StudioA (noise controlled, soundproof wall)
└─ studio001
└─ studio002
...
└─ studio018
StudioB (Normal room without soundproof wall)
└─ studio019
└─ studio020
...
└─ studio080
Zoom (Recorded online via Zoom and Zencastr)
└─ zoom001
└─ zoom002
...
└─ zoom020
```
Each recording is separated into two sessions: Script Session and Improvisation Session.
To map each utterance to an emotion, we use the majority vote of answers from 3-8 annotators, collected via crowdsourcing (wang.in.th).
Script session
In the script session, the actor was assigned three sentences:
```
sentence 1: พรุ่งนี้มันวันหยุดราชการนะรู้รึยัง หยุดยาวด้วย
(Do you know tomorrow is a public holiday and it's the long one.)
sentence 2: อ่านหนังสือพิมพ์วันนี้รึยัง รู้ไหมเรื่องนั้นกลายเป็นข่าวใหญ่ไปแล้ว
(Have you read today's newspaper, that story was the topliner.)
sentence 3: ก่อนหน้านี้ก็ยังเห็นทำตัวปกติดี ใครจะไปรู้หล่ะ ว่าเค้าคิดแบบนั้น
(He/She was acting normal recently, who would thought that he/she would think like that.)
```
The actor was asked to speak each sentence two times for each emotion with two emotional intensity levels (normal, strong), with an additional neutral expression.
Improvisation session
For the Improvisation session, two actors were asked to improvise according to a provided emotion and scenario, as summarized in the table below.
| Scenario | Actor A | Actor B |
|---|---|---|
| 1 | (Neutral) A hotel receptionist trying to explain and serve the customer | (Angry) An angry customer dissatisfied with the hotel services |
| 2 | (Happy) A person excitedly talking with B about his/her marriage plan | (Happy) A person happily talking with A and helping him/her plan the ceremony |
| 3 | (Sad) A patient feeling depressed | (Neutral) A doctor attempting to talk with A neutrally |
| 4 | (Angry) A furious boss talking with the employee | (Frustrated) A frustrated person attempting to argue with his/her boss |
| 5 | (Frustrated) A person frustratedly talking about another person's actions | (Sad) A person feeling guilty and sad about his/her actions |
| 6 | (Happy) Happy hotel staff | (Happy) A happy customer |
| 7 | (Sad) A sad person feeling insecure about the upcoming marriage | (Frustrated) A person frustrated about the other person's insecurity |
| 8 | (Frustrated) A frustrated patient | (Neutral) A doctor talking with the patient |
| 9 | (Neutral) A worker assigned to tell his/her co-worker about the company's bad situation | (Sad) An employee feeling sad after listening |
| 10 | (Angry) A person raging about another person's behavior | (Angry) A person who feels blamed by the other person |
| 11 | (Frustrated) A director unsatisfied with a co-worker | (Frustrated) A frustrated person who tries their best on the job |
| 12 | (Happy) A person who gets a new job or promotion | (Sad) A person desperate about his/her job |
| 13 | (Neutral) A patient inquiring for information | (Happy) A happy doctor giving the patient more information |
| 14 | (Angry) A person upset with his/her work | (Neutral) A calm friend who listens to the other person's problem |
| 15 | (Sad) A person sadly telling another person about a relationship | (Angry) A person who feels angry after listening to the other person's bad relationship |

File naming convention
Each file has a unique filename and is provided in .flac format with a sample rate of about 44.1 kHz. The filename consists of a 5- to 6-part identifier (e.g., s002_clip_actor003_impro1_1.flac, s002_clip_actor003_script1_1_1a.flac). These identifiers define the stimulus characteristics:
File Directory Management
```
studio (e.g., studio1-10)
└─ <studio-num> (studio1, studio2, ...)
└─ <mic-type> (con, clip, middle)
└─<audio-file> (.flac)
zoom (e.g., zoom1-10)
└─ <zoom-num> (zoom1, zoom2, ...)
└─ <mic-type> (mic)
└─ <audio-file> (.flac)
```
Filename identifiers
```
Recording ID (s = studio recording, z = zoom recording)
Number of recording (e.g., s001, z001)
Microphone type (clip, con, middle, mic)
Zoom recording session
mic = An actor's microphone-of-choice
studio recording session
con = Condenser microphone (Cardioid polar patterns) which was placed 0.5m from the actor setting
clip = Lavalier microphone (Omni-directional patterns) attached to the actor’s shirt collar
middle = Condenser microphone (Figure-8 polar patterns) which was placed between actors
Actor ID (actor001 to actor200: Odd-numbered actors are Actor A, even-numbered actors are Actor B in improvisation session).
Session ID (impro = Improvisation Session, script = Script Session)
Script Session (e.g., _script1_1_1a)
Sentence ID (script1-script3)
Repetition (1 = 1st repetition, 2 = 2nd repetition)
Emotion (1 = Neutral, 2 = Angry, 3 = Happy, 4 = Sad, 5 = Frustrated)
Emotional intensity (a = Normal, b = Strong)
Improvisation Session (e.g., _impro1_1)
Scenario ID (impro1-15)
Utterance no. (e.g., _impro1_1 , _impro1_2)
Filename example: s002_clip_actor003_impro1_1.flac
Studio recording number 2 (s002)
Recording by Lavalier microphone (clip)
3rd Actor (actor003)
Improvisation session, scenario 1 (impro1)
1st utterance of scenario recording (1)
Other Files
emotion_label.json - a dictionary for recording id, assigned emotion (assigned_emo), majority emotion (emotion_emo), annotated emotions from crowdsourcing (annotated), and majority agreement score (agreement)
actor_demography.json - a dictionary that contains information about the age and sex of actors.
```
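A small helper sketch (hypothetical code, not part of the dataset) for splitting a file name into the identifier fields described above:

```python
import re

def parse_thai_ser_name(filename: str) -> dict:
    """Split a THAI SER file name into its identifier fields."""
    parts = filename.removesuffix(".flac").split("_")
    info = {
        "recording": parts[0],  # e.g. 's002' (studio) or 'z001' (zoom)
        "mic": parts[1],        # 'clip', 'con', 'middle' or 'mic'
        "actor": parts[2],      # e.g. 'actor003'
        "session": parts[3],    # e.g. 'impro1' or 'script1'
    }
    if parts[3].startswith("impro"):
        info["utterance"] = parts[4]   # utterance number within the scenario
    else:
        info["repetition"] = parts[4]  # 1st or 2nd repetition
        emotion, intensity = re.fullmatch(r"(\d)([ab])", parts[5]).groups()
        info["emotion_id"] = emotion   # 1=Neutral, 2=Angry, 3=Happy, 4=Sad, 5=Frustrated
        info["intensity"] = intensity  # a=Normal, b=Strong
    return info

print(parse_thai_ser_name("s002_clip_actor003_impro1_1.flac"))
print(parse_thai_ser_name("s002_clip_actor003_script1_1_1a.flac"))
```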
Version
Version 1 (26 March 2021): The THAI SER Thai speech emotion recognition dataset contains 100 recordings (80 studio and 20 Zoom), totalling 41 hours 36 minutes of audio and 27,854 utterances, all of which are labeled.

Dataset statistics:

| Recording environment | Session | Number of utterances | Duration (hrs) |
|---|---|---|---|
| Zoom (20) | Script | 2,398 | 4.0279 |
| Zoom (20) | Improvisation | 3,606 | 5.8860 |
| Studio (80) | Script | 9,582 | 13.6903 |
| Studio (80) | Improvisation | 12,268 | 18.0072 |
| Total (100) | | 27,854 | 41.6114 |

Dataset sponsorship and license
Advanced Info Services Public Company Limited
This work is published under a Creative Commons BY-SA 4.0 license. |
james-burton/wine_reviews | ---
dataset_info:
features:
- name: country
dtype: string
- name: description
dtype: string
- name: points
dtype: int64
- name: price
dtype: float64
- name: province
dtype: string
- name: variety
dtype:
class_label:
names:
'0': Bordeaux-style Red Blend
'1': Bordeaux-style White Blend
'2': Cabernet Franc
'3': Cabernet Sauvignon
'4': Champagne Blend
'5': Chardonnay
'6': Gamay
'7': Gewürztraminer
'8': Grüner Veltliner
'9': Malbec
'10': Merlot
'11': Nebbiolo
'12': Pinot Grigio
'13': Pinot Gris
'14': Pinot Noir
'15': Portuguese Red
'16': Portuguese White
'17': Red Blend
'18': Rhône-style Red Blend
'19': Riesling
'20': Rosé
'21': Sangiovese
'22': Sauvignon Blanc
'23': Shiraz
'24': Sparkling Blend
'25': Syrah
'26': Tempranillo
'27': Viognier
'28': White Blend
'29': Zinfandel
splits:
- name: train
num_bytes: 21014061.962412182
num_examples: 71504
- name: validation
num_bytes: 3708554.0375878178
num_examples: 12619
- name: test
num_bytes: 6181444
num_examples: 21031
download_size: 16227253
dataset_size: 30904060.0
---
# Dataset Card for "wine_reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/OxfordFlowers_test_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 444513
num_examples: 1000
download_size: 50458
dataset_size: 444513
---
# Dataset Card for "OxfordFlowers_test_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zetavg/mlqa_en_zh_tw | ---
license: cc-by-3.0
task_categories:
- question-answering
- translation
language:
- zh
- en
size_categories:
- 1K<n<10K
pretty_name: MLQA en-zh_tw
---
[MLQA (MultiLingual Question Answering)](https://github.com/facebookresearch/mlqa) is a bilingual English-Chinese question answering dataset: a version of the original MLQA dataset converted into Traditional Chinese (Taiwan), with the matching Chinese and English items merged so that it can conveniently be used with bilingual language models. (Thanks to: [BYVoid/OpenCC](https://github.com/BYVoid/OpenCC), [vinta/pangu.js](https://github.com/vinta/pangu.js))

It is divided into two splits, `dev` and `test`, with 302 and 2,986 examples respectively.

Example:
```json
[
{
"title": {
"en": "Curling at the 2014 Winter Olympics",
"zh_tw": "2014 年冬季奧林匹克運動會冰壺比賽"
},
"paragraphs": [
{
"context": {
"en": "Qualification to the curling tournaments at the Winter Olympics was determined through two methods. Nations could qualify teams by earning qualification points from performances at the 2012 and 2013 World Curling Championships. Teams could also qualify through an Olympic qualification event which was held in the autumn of 2013. Seven nations qualified teams via World Championship qualification points, while two nations qualified through the qualification event. As host nation, Russia qualified teams automatically, thus making a total of ten teams per gender in the curling tournaments.",
"zh_tw": "本屆冬奧會冰壺比賽參加資格有兩種辦法可以取得。各國家或地區可以透過 2012 年和 2013 年的世界冰壺錦標賽,也可以透過 2013 年 12 月舉辦的一次冬奧會資格賽來取得資格。七個國家透過兩屆世錦賽積分之和來獲得資格,兩個國家則透過冬奧會資格賽。作為主辦國,俄羅斯自動獲得參賽資格,這樣就確定了冬奧會冰壺比賽的男女各十支參賽隊伍。"
},
"qas": [
{
"id": "b08184972e38a79c47d01614aa08505bb3c9b680",
"question": {
"zh_tw": "俄羅斯有多少隊獲得參賽資格?",
"en": "How many teams did Russia qualify for?"
},
"answers": {
"en": [
{
"text": "ten teams",
"answer_start": 543
}
],
"zh_tw": [
{
"text": "十支",
"answer_start": 161
}
]
}
}
]
}
]
}
]
```
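A minimal sketch of walking over this merged structure (the `dev.json` file name is only an assumption; adapt it to however you obtain the data):

```python
import json

# hypothetical local file containing a list of articles shaped like the example above
with open("dev.json", encoding="utf-8") as f:
    articles = json.load(f)

for article in articles:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]  # {"en": ..., "zh_tw": ...}
        for qa in paragraph["qas"]:
            question = qa["question"]   # bilingual dict (one language may be missing)
            answers = qa["answers"]     # per-language list of {"text", "answer_start"}
            print(question.get("en"), "->", [a["text"] for a in answers.get("en", [])])
```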
For any further information, see: https://github.com/facebookresearch/mlqa .

## Original dataset

https://github.com/facebookresearch/mlqa — using the `context-zh-question-zh`, `context-zh-question-en`, and `context-en-question-zh` files from each of the `dev` and `test` splits, six files in total.

## Conversion process

1. Used [OpenCC](https://github.com/BYVoid/OpenCC) with the `s2twp.json` configuration to convert Simplified Chinese into Traditional Chinese (Taiwan) with common Taiwanese vocabulary.
2. Used the Python version of [pangu.js](https://github.com/vinta/pangu.js) to add spaces between Chinese and English (full-width and half-width) text; a simplified sketch of these two steps follows below.
3. Merged the matching items of the Chinese and English datasets.

For the detailed conversion process, see: https://github.com/zetavg/LLM-Research/blob/bba5ff7/MLQA_Dataset_Converter_(en_zh_tw).ipynb .
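A simplified sketch of conversion steps 1-2 (assuming the `opencc-python-reimplemented` and `pangu` packages; the exact code lives in the notebook linked above):

```python
# pip install opencc-python-reimplemented pangu   (assumed packages)
from opencc import OpenCC
import pangu

cc = OpenCC("s2twp")  # Simplified Chinese -> Traditional Chinese (Taiwan), Taiwanese phrasing

def to_zh_tw(text: str) -> str:
    # step 1: script/vocabulary conversion; step 2: spacing between CJK and half-width text
    return pangu.spacing_text(cc.convert(text))

print(to_zh_tw("该软件发布在Windows和Mac OS上。"))
```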
## Known issues

* For some items, the `title`, the `context` of a `paragraph`, the question, or the answers may be missing in one of the two languages.
* Some questions and answers may involve misreadings or ambiguity, for example the question "How many teams did Russia qualify for?" and its answer in the "Curling at the 2014 Winter Olympics" example listed above.
* The `context` of a `paragraph` can differ greatly in length and coverage between the two language versions. For example, in the development split, for the item whose `title` is “Adobe Photoshop”:
  * the `zh_tw` version has only two sentences: 「Adobe Photoshop,簡稱 “PS”,是一個由 Adobe 開發和發行的影象處理軟體。該軟體釋出在 Windows 和 Mac OS 上。」
  * while the `en` version is a full paragraph: “Adobe Photoshop is a raster graphics editor developed and published by Adobe Inc. for Windows and macOS. It was originally created in 1988 by Thomas and John Knoll. Since then, this software has become the industry standard not only in raster graphics editing, but in digital art as a whole. …” (127 more characters omitted)