
# Leyzer: A Dataset for Multilingual Virtual Assistants

Leyzer is a multilingual text corpus designed for studying multilingual and cross-lingual natural language understanding (NLU) models and strategies for localizing virtual assistants. It covers 20 domains across three languages (English, Spanish and Polish) with 186 intents, and the number of samples per intent ranges from 1 to 672 sentences. For more statistics, please refer to the wiki.
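
A minimal sketch of loading the corpus with the Hugging Face `datasets` library is shown below. The configuration name (`"en-US"`) is an assumption; check the repository for the exact configuration names and note that a repo with a loading script may require `trust_remote_code=True`.

```python
from datasets import load_dataset

# Load one language configuration of the Leyzer corpus.
# "en-US" is an assumed configuration name; adjust to the configs
# actually listed in the cartesinus/leyzer-fedcsis repository.
dataset = load_dataset(
    "cartesinus/leyzer-fedcsis",
    "en-US",
    trust_remote_code=True,  # needed if the repo still ships a loading script
)

# Inspect the splits and a single example (utterance, domain, intent, ...).
print(dataset)
print(dataset["train"][0])
```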

## Citation

If you use this dataset, please cite the following:

```bibtex
@inproceedings{kubis2023caiccaic,
    author={Marek Kubis and Paweł Skórzewski and Marcin Sowański and Tomasz Ziętkiewicz},
    title={Center for Artificial Intelligence Challenge on Conversational AI Correctness},
    booktitle={Proceedings of the 18th Conference on Computer Science and Intelligence Systems},
    series={Annals of Computer Science and Information Systems},
    volume={35},
    pages={1319--1324},
    year={2023},
    doi={10.15439/2023B6058},
    url={http://dx.doi.org/10.15439/2023B6058}
}
```