S2S-Arena, Evaluating Speech2Speech Protocols on Instruction Following with Paralinguistic Information
Abstract
The rapid development of large language models (LLMs) has brought significant attention to speech models, particularly recent progress in speech2speech protocols that support both speech input and output. However, existing benchmarks, which adopt automatic text-based evaluators to assess the instruction-following ability of these models, lack consideration of paralinguistic information in both speech understanding and generation. To address these issues, we introduce S2S-Arena, a novel arena-style S2S benchmark that evaluates instruction-following capabilities with paralinguistic information in both speech-in and speech-out across real-world tasks. We design 154 samples that fuse TTS-synthesized and live-recorded speech across four domains covering 21 tasks, and we manually evaluate existing popular speech models in an arena-style manner. The experimental results show that: (1) in addition to the superior performance of GPT-4o, speech models built by cascading ASR, LLM, and TTS outperform jointly trained models after text-speech alignment in speech2speech protocols; (2) when paralinguistic information is considered, a speech model's knowledgeability depends mainly on its LLM backbone, while its multilingual support is limited by its speech module; (3) strong speech models can already understand paralinguistic information in speech input, but generating appropriate audio with paralinguistic information remains a challenge.
Community
In this paper, we propose S2S-Arena for benchmarking speech models: a speech2speech evaluation protocol for instruction-following ability with paralinguistic information. We collect 154 TTS-synthesized and human-recorded samples from four domains (Education, Social Companionship, Entertainment, and Medical Consultation) to compare existing speech models (GPT-4o-realtime, FunAudioLLM, SpeechGPT, etc.). We also present four findings based on our arena-style comparison. Everyone can try it in our Hugging Face Space.
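For readers unfamiliar with arena-style evaluation: annotators hear two anonymized speech responses to the same instruction, pick a winner (or a tie), and the pairwise votes are aggregated into a model ranking. The page above does not spell out how the votes are aggregated, so the Elo-style update below is only a minimal illustrative sketch, not necessarily the authors' exact method; the `elo_update`/`rank_models` helpers and the model-name strings in the example votes are hypothetical.

```python
from collections import defaultdict

def elo_update(r_a, r_b, score_a, k=32):
    """Update Elo ratings for one pairwise comparison.
    score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

def rank_models(pairwise_votes, init_rating=1000.0):
    """pairwise_votes: iterable of (model_a, model_b, score_a) tuples
    collected from human annotators comparing two anonymized responses."""
    ratings = defaultdict(lambda: init_rating)
    for model_a, model_b, score_a in pairwise_votes:
        ratings[model_a], ratings[model_b] = elo_update(
            ratings[model_a], ratings[model_b], score_a
        )
    return dict(sorted(ratings.items(), key=lambda kv: -kv[1]))

# Hypothetical votes: 1.0 = first model preferred, 0.0 = second, 0.5 = tie.
votes = [
    ("model-a", "model-b", 1.0),
    ("model-c", "model-b", 0.5),
    ("model-a", "model-c", 1.0),
]
print(rank_models(votes))  # ranking from highest to lowest rating
```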
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- InSerter: Speech Instruction Following with Unsupervised Interleaved Pre-training (2025)
- LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM (2025)
- Balancing Speech Understanding and Generation Using Continual Pre-training for Codec-based Speech LLM (2025)
- URO-Bench: A Comprehensive Benchmark for End-to-End Spoken Dialogue Models (2025)
- Does Your Voice Assistant Remember? Analyzing Conversational Context Recall and Utilization in Voice Interaction Models (2025)
- Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction (2025)
- Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction (2025)