---
library_name: transformers
license: apache-2.0
datasets:
  - facebook/multilingual_librispeech
  - speechcolab/gigaspeech
  - mozilla-foundation/common_voice_15_0
  - facebook/voxpopuli
  - MLCommons/peoples_speech
language:
  - en
base_model:
  - mistralai/Mistral-7B-v0.1
pipeline_tag: audio-to-audio
---

# Model Card

The Unified Speech-Text Model (USTM) is a speech-text cross-modal pretrained model obtained by further training Mistral-7B-v0.1 with the unified speech-text pretraining methodology proposed in the paper below.

Paper: [Paralinguistics-Aware Speech-Empowered LLMs for Natural Conversation](https://openreview.net/forum?id=NjewXJUDYq) (NeurIPS 2024)
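Since the card lists `library_name: transformers` with `mistralai/Mistral-7B-v0.1` as the base model, loading would presumably follow the standard `transformers` pattern. The sketch below is illustrative only: the repository id, the `AutoModelForCausalLM` entry point, and the `trust_remote_code` flag are assumptions not confirmed by this card; check the repository files for the actual entry point.

```python
# Minimal loading sketch. Assumptions (not confirmed by this card):
# the repository id is a placeholder, and the checkpoint exposes a
# causal-LM interface inherited from its Mistral-7B-v0.1 backbone.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "naver-ai/USTM"  # placeholder repository id; replace with the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # requires `accelerate`; places weights across available devices
    trust_remote_code=True,  # assumption: the speech-text wrapper may ship custom modeling code
)
```

The audio front end (how speech is tokenized into and decoded from the model, given the `audio-to-audio` pipeline tag) is not described on this card, so it is not covered by the sketch; refer to the paper and the repository code for those details.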

## BibTeX

```bibtex
@inproceedings{kim2024paralinguisticsaware,
  title={Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation},
  author={Heeseung Kim and Soonshin Seo and Kyeongseok Jeong and Ohsung Kwon and Soyoon Kim and Jungwhan Kim and Jaehong Lee and Eunwoo Song and Myungwoo Oh and Jung-Woo Ha and Sungroh Yoon and Kang Min Yoo},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=NjewXJUDYq}
}
```