According to a ChinaZ.com report, researchers from Fudan University have introduced SpeechGPT-Gen, an 8-billion-parameter speech large language model designed for efficient modeling of semantic and perceptual information. The model shows strong performance and scalability across applications such as zero-shot text-to-speech, voice conversion, and spoken dialogue. It adopts Chain-of-Information Generation (CoIG), which separates the modeling of semantic and perceptual information and thereby addresses the inefficiencies of traditional speech generation methods. In addition, the model injects semantic information as a prior into flow matching, further improving generation efficiency and output quality.
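To make the idea of a semantic prior in flow matching concrete, below is a minimal, hypothetical PyTorch sketch (the class and function names are illustrative and not taken from the SpeechGPT-Gen release): instead of starting the flow from a standard Gaussian, the prior sample is centered on the semantic representation, so the learned velocity field only has to bridge the remaining perceptual gap.

```python
# Hypothetical sketch of flow matching with a semantic prior; not the authors' code.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy velocity-field estimator v_theta(x_t, t, semantic)."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2 + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t, semantic):
        # Condition on the noisy sample, the timestep, and the semantic features.
        return self.net(torch.cat([x_t, semantic, t], dim=-1))

def flow_matching_loss(model, x1, semantic, sigma: float = 0.1):
    """One conditional flow-matching training step where the prior x0 is
    centered on the semantic representation rather than pure noise."""
    t = torch.rand(x1.size(0), 1)                 # timestep in [0, 1]
    x0 = semantic + sigma * torch.randn_like(x1)  # semantic-informed prior sample
    x_t = (1 - t) * x0 + t * x1                   # linear interpolation path
    target_v = x1 - x0                            # target velocity along the path
    pred_v = model(x_t, t, semantic)
    return ((pred_v - target_v) ** 2).mean()

# Toy usage: perceptual targets and semantic features share dimensionality here.
dim = 64
model = VelocityNet(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x1 = torch.randn(8, dim)        # stand-in for perceptual (acoustic) features
semantic = torch.randn(8, dim)  # stand-in for semantic representations
loss = flow_matching_loss(model, x1, semantic)
loss.backward()
opt.step()
```

Because the prior already carries the semantic content, the model spends its capacity on perceptual detail, which is the intuition behind the reported gains in efficiency and output quality.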