Wenhao Guan1,2, Zhikang Niu2,3, Ziyue Jiang4, Kaidi Wang1, Peijie Chen1, Qingyang Hong1, Lin Li1, Xie Chen2,3
1Xiamen University, China
2Shanghai Innovation Institute, China
3Shanghai Jiao Tong University, China
4Zhejiang University, China
Model Download | Quick Start | Citation
📄 Paper Link (UniVoice)
Large language models (LLMs) have demonstrated promising performance in both automatic speech recognition (ASR) and text-to-speech (TTS) systems, gradually becoming the mainstream approach. However, most current approaches address these tasks separately rather than through a unified framework. This work aims to integrate the two tasks into one unified model. Although discrete speech tokenization enables joint modeling, its inherent information loss limits performance in both recognition and generation. In this work, we present UniVoice, a unified LLM framework built on continuous representations that seamlessly integrates speech recognition and synthesis within a single model. Our approach combines the strengths of autoregressive modeling for speech recognition with flow matching for high-quality generation. To mitigate the inherent divergence between autoregressive and flow-matching models, we further design a dual attention mechanism that switches between a causal mask for recognition and a bidirectional attention mask for synthesis. Furthermore, the proposed text-prefix-conditioned speech infilling method enables high-fidelity zero-shot voice cloning. Experimental results demonstrate that our method matches or exceeds current single-task models on both ASR and zero-shot TTS. This work explores new possibilities for end-to-end speech understanding and generation.
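The dual attention mechanism described above can be sketched as follows. This is an illustrative toy implementation, not the repository's actual API: the function name, the boolean mask convention (`True` = may attend), and the mode strings are assumptions made for the example.

```python
import numpy as np

def build_attention_mask(seq_len: int, mode: str) -> np.ndarray:
    """Sketch of a dual attention mask (illustrative, not UniVoice's code).

    'asr'  -> causal (lower-triangular) mask for autoregressive recognition
    'tts'  -> fully bidirectional mask for flow-matching synthesis
    Entry [i, j] is True when position i may attend to position j.
    """
    if mode == "asr":
        # Causal: position i may only attend to positions j <= i.
        return np.tril(np.ones((seq_len, seq_len), dtype=bool))
    if mode == "tts":
        # Bidirectional: every position attends to every position.
        return np.ones((seq_len, seq_len), dtype=bool)
    raise ValueError(f"unknown mode: {mode}")
```

Switching a single mask like this lets one transformer serve both tasks: the same weights run autoregressively for ASR and with full context for TTS.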
In this work, we use SmolLM2-360M as the LLM backbone.
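For the synthesis branch, the flow-matching training objective can be sketched as below. This is a generic conditional flow-matching loss with a linear interpolation path (as popularized by F5-TTS-style models), shown for intuition only; the `model` callable and all shapes are hypothetical, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(x1, model, rng):
    """Illustrative conditional flow-matching training step.

    x1: clean speech features, shape (batch, dim).
    model: callable (x_t, t) -> predicted velocity, same shape as x_t.
    """
    b, d = x1.shape
    x0 = rng.standard_normal((b, d))          # noise sample
    t = rng.uniform(size=(b, 1))              # random time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1              # point on the linear path
    target_v = x1 - x0                        # target velocity field
    pred_v = model(xt, t)                     # model predicts velocity
    return np.mean((pred_v - target_v) ** 2)  # MSE regression loss
```

At inference time, samples are drawn by integrating the learned velocity field from noise (t = 0) to data (t = 1) with an ODE solver.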
| Model | Download |
|---|---|
| UniVoice-TTS | 🤗 Hugging Face |
| UniVoice-All | 🤗 Hugging Face |
Starting from a Python >= 3.10 environment, install the necessary dependencies by running the following commands:

```bash
git clone https://github.com/gwh22/UniVoice
cd UniVoice

# We recommend using conda to create a new environment.
conda create -n UniVoice python=3.10
conda activate UniVoice

# Install CUDA >= 11.8
conda install cudatoolkit=11.8 -c nvidia

pip install -r requirements.txt
```
Run inference with the provided scripts:

```bash
cd UniVoice

# for the ASR task
sh scripts/infer_asr.sh

# for the TTS task
sh scripts/infer_tts.sh
```
To train the model, run:

```bash
cd UniVoice
sh scripts/train_all.sh
```

Our code is released under the MIT License. If our work and codebase are useful for you, please cite it as:
```bibtex
@article{guan2025univoice,
  title={UniVoice: Unifying Autoregressive ASR and Flow-Matching based TTS with Large Language Models},
  author={Guan, Wenhao and Niu, Zhikang and Jiang, Ziyue and Wang, Kaidi and Chen, Peijie and Hong, Qingyang and Li, Lin and Chen, Xie},
  journal={arXiv preprint arXiv:2510.04593},
  year={2025}
}
```
This codebase borrows from DiT, SmolLM2-360M, F5-TTS, Monoformer, LLaVA, and Transformers. Thanks for their great work.