---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 16496867
num_examples: 9500
- name: test
num_bytes: 887460
num_examples: 500
download_size: 11045465
dataset_size: 17384327
task_categories:
- text-generation
language:
- ar
pretty_name: لا روبوتات
license: cc-by-nc-4.0
---
# Dataset Card for "No Robots" 🙅‍♂️🤖
## Summary

"No Robots" is a dataset of 10,000 instructions and demonstrations created by professional annotators and translated to Arabic with the Google Cloud Platform Translation API. It can be used for supervised fine-tuning (SFT) to teach language models to follow instructions more accurately. "No Robots" was modelled after the instruction dataset described in OpenAI's InstructGPT paper and comprises the following categories (a small loading sketch follows the table):
| Category | Count |
|---|---|
| Generation | 4560 |
| Open QA | 1240 |
| Brainstorm | 1120 |
| Chat | 850 |
| Rewrite | 660 |
| Summarize | 420 |
| Coding | 350 |
| Classify | 350 |
| Closed QA | 260 |
| Extract | 190 |
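The per-category breakdown above can be checked directly against the `category` column. The following is a minimal sketch; the Hub repository id is a placeholder, since this card does not state it:

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repo id; replace with the actual Hub path of this Arabic version.
ds = load_dataset("your-username/no_robots_arabic", split="train")

# `category` keeps the original English labels, e.g. "Generation", "Open QA", "Chat".
print(Counter(ds["category"]).most_common())
```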
## Languages
This dataset is available in Arabic only. The original version in English can be found at this link, and the Turkish version at this link.
## Data Fields

The columns are as follows:

* `prompt`: the instruction that the model should follow.
* `prompt_id`: a unique identifier for the prompt.
* `messages`: a list of dictionaries, each describing one message through its text (key: `content`) and its sender (key: `role`); see the sketch below the list.
* `category`: the task category; these labels were left in English and not translated.
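Because `messages` already uses the role/content chat format, each example can be rendered into a single training string for SFT with a chat template. The sketch below reuses the placeholder repo id from above and picks Zephyr's tokenizer only as an example of a model that ships a chat template:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repo id; replace with the actual Hub path of this Arabic version.
ds = load_dataset("your-username/no_robots_arabic", split="train")
example = ds[0]
print(example["prompt_id"], example["category"])

# Any tokenizer that defines a chat template works here; Zephyr is just an example.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
print(text)
```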
## Splits

| | train | test |
|---|---|---|
| No Robots | 9500 | 500 |
## License

The dataset is available under the CC BY-NC 4.0 license.
## Citation Information
@misc{no_robots,
author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
title = {No Robots},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}