Yesterday, Alibaba's Tongyi team officially launched Qwen-TTS, a text-to-speech (TTS) model that has drawn industry attention for its highly realistic voices and support for multiple dialects. The AIbase editorial team has compiled the latest information for an in-depth look at this speech synthesis tool, available through the Qwen API, and its significance for AI voice technology.

Qwen-TTS: Ultra-realistic Speech Synthesis

Qwen-TTS is the Tongyi team's latest text-to-speech model, built on a large-scale speech dataset. Trained on millions of hours of speech, it generates voices that are highly natural in intonation, rhythm, and emotional expression. Through the Qwen API, users can produce voices that sound close to a real person, suitable for applications such as education, entertainment, and intelligent customer service.
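As a rough sketch of what calling the model through the Qwen API might look like: the parameter names below (`model`, `text`, `voice`) follow the conventions in Qwen's published examples, but the exact SDK interface is an assumption here and should be checked against the official API reference. The actual network call is left as a comment so the sketch runs offline.

```python
import os

def build_tts_request(text: str, voice: str = "Cherry") -> dict:
    """Assemble the parameters a Qwen-TTS synthesis call would take.

    The voice names come from the Qwen-TTS announcement; the parameter
    layout itself is illustrative, not a verified API contract.
    """
    return {
        "model": "qwen-tts",
        "text": text,
        "voice": voice,  # e.g. Cherry, Ethan, Chelsie, Serena, Dylan, Jada, Sunny
    }

if __name__ == "__main__":
    params = build_tts_request("你好，欢迎使用通义语音合成。", voice="Dylan")
    print(params)
    # With the DashScope SDK installed and DASHSCOPE_API_KEY set in the
    # environment (os.getenv), the real call would look roughly like this
    # (unverified -- consult the official documentation):
    #   import dashscope
    #   resp = dashscope.audio.qwen_tts.SpeechSynthesizer.call(
    #       api_key=os.getenv("DASHSCOPE_API_KEY"), **params)
```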


Support for Multiple Dialects and Bilingual Voices

One of the highlights of Qwen-TTS is its broad language support. Beyond standard Mandarin, the model covers three Chinese dialects: Beijing, Shanghai, and Sichuan, giving users a more region-specific voice experience. Qwen-TTS also provides seven bilingual Chinese-English voices: Cherry, Ethan, Chelsie, Serena, Dylan, Jada, and Sunny. Each voice has been carefully tuned for authentic pronunciation and expressive delivery. This multi-dialect, multi-voice design greatly expands the model's application scenarios and serves users from different regional and cultural backgrounds.

Technical Breakthroughs: Streaming Output and Emotional Adjustment

Qwen-TTS supports streaming audio output and dynamically adjusts tone, speed, and emotional coloring based on the input text. The generated voice is not only realistic but also conveys subtle emotion. Compared with traditional TTS models, Qwen-TTS is nearly indistinguishable from real human speech in realism and expressiveness, and it reaches industry-leading results on benchmarks such as SeedTTS-Eval. This is attributed to the extensive training corpus and the Tongyi team's continuous optimization of its speech synthesis algorithms.
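The practical benefit of streaming output is that playback can begin before the full utterance has been synthesized. The consumer side of such a pipeline might look like the sketch below; the chunk iterator here is a stand-in for whatever stream object the Qwen API actually returns, which this sketch does not assume.

```python
import io
from typing import BinaryIO, Iterable

def drain_audio_stream(chunks: Iterable[bytes], sink: BinaryIO) -> int:
    """Write streamed audio chunks to a sink as they arrive.

    Returns the total number of bytes written. In a real application the
    sink could be a file or an audio playback buffer, so sound starts
    as soon as the first chunk lands rather than after full synthesis.
    """
    total = 0
    for chunk in chunks:
        if chunk:  # skip empty keep-alive chunks
            sink.write(chunk)
            total += len(chunk)
    return total

if __name__ == "__main__":
    # Stand-in for chunks arriving from a streaming TTS response.
    fake_stream = (bytes([i]) * 4 for i in range(3))
    buf = io.BytesIO()
    print(drain_audio_stream(fake_stream, buf))  # → 12
```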

Industry Impact and Future Prospects

The release of Qwen-TTS further advances the adoption of speech synthesis technology. From film dubbing and virtual anchors to intelligent assistants, Qwen-TTS shows great potential for delivering more natural interaction experiences. AIbase believes that as the realism gap between synthetic and human speech continues to narrow, dialect support and personalized voices will become key competitive factors. By opening Qwen-TTS through an API, the Tongyi team both lowers the barrier to use and gives developers more creative room.