---
dataset_info:
  features:
  - name: utterance
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 40588
    num_examples: 431
  - name: test
    num_bytes: 10031
    num_examples: 108
  download_size: 25385
  dataset_size: 50619
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- text-classification
language:
- en
---

# minds14

This is a text classification dataset intended for machine learning research and experimentation.

It was obtained by formatting another publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).

## Usage

It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from autointent import Dataset

minds14 = Dataset.from_datasets("AutoIntent/minds14")
```

## Source

This dataset is taken from `PolyAI/minds14` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from datasets import load_dataset

from autointent import Dataset


def convert_minds14(minds14_train):
    """Convert the MINDS-14 train split into the AutoIntent format."""
    utterances = []
    # Iterate over the source split in batches, keeping only the English
    # transcription and its intent id.
    for batch in minds14_train.iter(batch_size=16, drop_last_batch=False):
        for txt, intent_id in zip(batch["english_transcription"], batch["intent_class"], strict=False):
            utterances.append({"utterance": txt, "label": intent_id})
    return Dataset.from_dict({"train": utterances})


minds14 = load_dataset("PolyAI/minds14", "ru-RU", trust_remote_code=True)
minds14_converted = convert_minds14(minds14["train"])
```
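Because the card above defines a default config with `train` and `test` parquet files, the formatted data can also be inspected directly with the 🤗 `datasets` library, without AutoIntent. This is a minimal sketch; the split sizes printed should match the `dataset_info` section above:

```python
from datasets import load_dataset

# Load the already-formatted dataset straight from the Hub
raw = load_dataset("AutoIntent/minds14")

print(raw)              # DatasetDict with "train" and "test" splits
print(raw["train"][0])  # {"utterance": "...", "label": ...}
```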