The ByteDance Seed team has announced the release of Seed Diffusion Preview, an experimental diffusion language model and a notable technical step forward for the field. Through structured code-generation experiments, the model aims to validate the feasibility of discrete diffusion as a foundational framework for next-generation language models. Seed Diffusion Preview achieves an inference speed of 2146 tokens per second, 5.4 times faster than autoregressive models of comparable size, while performing on par with them across multiple code-generation benchmarks.

Seed Diffusion Preview targets the limitations of autoregressive (AR) models in inference speed and global controllability. Diffusion models have achieved remarkable success on continuous data such as images and video through a coarse-to-fine generation paradigm. Applying them to discrete domains like natural language, however, poses a fundamental challenge: the standard diffusion process assumes a continuous state space, which discrete token sequences do not provide. Even so, discrete diffusion models have shown strong potential in both scalability and performance.
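To make the discrete setting concrete, below is a minimal sketch of the absorbing-state ("masked") forward process that discrete diffusion language models commonly build on. It is illustrative only: the masking schedule, toy vocabulary, and function names are assumptions for this sketch, not details disclosed for Seed Diffusion Preview.

```python
# Sketch of an absorbing-state ("masked") forward process for discrete
# diffusion. Illustrative only; Seed Diffusion's actual noise schedule
# and tokenization are not described in this article.
import random

MASK = "[MASK]"

def corrupt(tokens: list[str], t: float) -> list[str]:
    """Forward process: independently replace each token with [MASK]
    with probability t (t=0 keeps the sequence, t=1 masks everything).
    Unlike Gaussian noise on pixels, the 'noise' must stay inside the
    discrete vocabulary, which is the core mismatch noted above."""
    return [MASK if random.random() < t else tok for tok in tokens]

code = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
for t in (0.25, 0.5, 0.9):
    print(f"t={t}: {' '.join(corrupt(code, t))}")

# A reverse model is then trained to predict the original tokens at all
# masked positions in parallel, rather than emitting one token at a time
# as an autoregressive model would.
```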


To address these challenges, Seed Diffusion Preview adopts four key techniques: two-stage curriculum learning, constrained-order diffusion, on-policy learning, and block-level parallel diffusion sampling. The two-stage curriculum combines mask-based diffusion training with edit-based diffusion training, strengthening both the model's local context completion and its ability to judge the global plausibility of code. Constrained-order diffusion injects structural priors from code to guide the model toward the correct dependency order. On-policy learning accelerates inference by training the model to reduce the number of generation steps it needs. Finally, the block-level parallel diffusion sampling scheme enables efficient block-wise inference while preserving causal order between blocks, as sketched below.
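The following sketch shows one plausible shape of block-level parallel sampling: blocks are committed left to right, while the tokens inside the current block are refined in parallel over a small number of denoising steps. The names denoise_block and toy_denoiser, the block size, and the step count are hypothetical stand-ins, not the released implementation.

```python
# Illustrative sketch of block-level parallel sampling: blocks are emitted
# left to right (preserving causal order between blocks), while all tokens
# inside the current block are refined in parallel over a few diffusion
# steps. `denoise_block` is a hypothetical stand-in for the actual model.
from typing import Callable

MASK = "[MASK]"

def sample(
    denoise_block: Callable[[list[str], list[str]], list[str]],
    num_blocks: int,
    block_size: int,
    num_steps: int,
) -> list[str]:
    output: list[str] = []
    for _ in range(num_blocks):
        block = [MASK] * block_size               # start the block fully masked
        for _ in range(num_steps):                # a few parallel refinement steps,
            block = denoise_block(output, block)  # conditioned on committed blocks
        output.extend(block)                      # commit and move to the next block
    return output

# Toy denoiser: fills every masked slot with a placeholder token. A real
# model would predict all masked positions in one forward pass.
def toy_denoiser(context: list[str], block: list[str]) -> list[str]:
    return [f"tok{len(context) + i}" if tok == MASK else tok
            for i, tok in enumerate(block)]

print(" ".join(sample(toy_denoiser, num_blocks=3, block_size=4, num_steps=2)))
```

Because each block of tokens is decoded jointly in a few refinement steps rather than one token per forward pass, throughput grows with block size; this is the lever behind the reported speedup over token-by-token autoregression.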

Experimental results show that Seed Diffusion Preview reaches a code inference speed of 2146 tokens/s, 5.4 times faster than autoregressive models of comparable size. The speedup does not come at the cost of quality: the model performs on par with leading autoregressive models on multiple industry benchmarks and even surpasses them on tasks such as code editing. The result demonstrates the potential of discrete diffusion models not only for accelerating inference but also for complex reasoning tasks.

Project page: https://seed.bytedance.com/seed_diffusion

Demo link: https://studio.seed.ai/exp/seed_diffusion