---
library_name: transformers
license: apache-2.0
datasets:
- facebook/multilingual_librispeech
- speechcolab/gigaspeech
- mozilla-foundation/common_voice_15_0
- facebook/voxpopuli
- MLCommons/peoples_speech
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
pipeline_tag: audio-to-audio
---
# Model Card
Unified Speech-Text Model (USTM) is a speech-text cross-modal pretrained model, obtained by further training Mistral-7B-v0.1 with the unified speech-text pretraining methodology proposed in the paper below.
## Paralinguistics-Aware Speech-Empowered LLMs for Natural Conversation [NeurIPS 2024]
- **Repository:** https://github.com/naver-ai/usdm
- **Paper:** https://openreview.net/forum?id=NjewXJUDYq
- **Project Page:** https://unifiedsdm.github.io/
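## Usage
Since the model is distributed in the `transformers` format with `mistralai/Mistral-7B-v0.1` as its base, it can presumably be loaded with the standard Auto classes. The sketch below is a minimal, non-authoritative example; the model id `naver-ai/USTM` is a placeholder assumption and should be replaced with this repository's actual Hugging Face id. See the GitHub repository above for the full speech tokenization and inference pipeline.

```python
def load_ustm(model_id: str = "naver-ai/USTM"):
    """Load the USTM checkpoint with the standard transformers Auto classes.

    NOTE: ``model_id`` above is a placeholder assumption, not a confirmed
    repository id; substitute the actual id of this model card's repo.
    """
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```

For example, `tokenizer, model = load_ustm()` would download the checkpoint and return the pair; generation over unit/text sequences then follows the pipeline in the linked repository.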
## BibTeX
```bibtex
@inproceedings{
kim2024paralinguisticsaware,
title={Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation},
author={Heeseung Kim and Soonshin Seo and Kyeongseok Jeong and Ohsung Kwon and Soyoon Kim and Jungwhan Kim and Jaehong Lee and Eunwoo Song and Myungwoo Oh and Jung-Woo Ha and Sungroh Yoon and Kang Min Yoo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NjewXJUDYq}
}
```