---
dataset_info:
  features:
  - name: utterance
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 50619
    num_examples: 539
  download_size: 21262
  dataset_size: 50619
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-classification
language:
- en
---
# minds14
This is a text classification dataset intended for machine learning research and experimentation.
It was obtained by reformatting another publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).
## Usage
It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
```python
from autointent import Dataset
minds14 = Dataset.from_datasets("AutoIntent/minds14")
```
## Source
This dataset is taken from `PolyAI/minds14` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
```python
from datasets import load_dataset

from autointent import Dataset


def convert_minds14(minds14_train):
    """Convert the MINDS-14 train split into the AutoIntent format."""
    utterances = []
    # Iterate over column-oriented batches and flatten them into
    # row-oriented {"utterance": ..., "label": ...} records.
    for batch in minds14_train.iter(batch_size=16, drop_last_batch=False):
        for txt, intent_id in zip(batch["english_transcription"], batch["intent_class"], strict=False):
            utterances.append({"utterance": txt, "label": intent_id})
    return Dataset.from_dict({"train": utterances})


minds14 = load_dataset("PolyAI/minds14", "ru-RU", trust_remote_code=True)
minds14_converted = convert_minds14(minds14["train"])
```
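The conversion flattens column-oriented batches (dicts of equal-length lists, as yielded by `datasets`' `iter`) into row-oriented records. A minimal dependency-free sketch of that reshaping, using a made-up toy batch rather than real MINDS-14 data:

```python
# Toy stand-in for one batch yielded by `minds14_train.iter(batch_size=...)`:
# a column-oriented dict of equal-length lists (hypothetical example values).
batch = {
    "english_transcription": ["i want to check my balance", "freeze my card please"],
    "intent_class": [4, 7],
}

# Flatten the columns into row-oriented records,
# mirroring the inner loop of convert_minds14 above.
utterances = [
    {"utterance": txt, "label": intent_id}
    for txt, intent_id in zip(batch["english_transcription"], batch["intent_class"])
]

print(utterances[0])  # {'utterance': 'i want to check my balance', 'label': 4}
```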