Text-to-image diffusion models trained on large-scale data have come to dominate generation tasks thanks to their high-quality, diverse outputs, though their iterative sampling process is computationally expensive. Recently, researchers from Google and Johns Hopkins University proposed a faster, more efficient distillation method for text-to-image generation. The study introduces a single-stage distillation approach that starts from an unconditionally pre-trained model and produces a distilled conditional diffusion model. Experiments show that this distillation technique outperforms previous methods in both visual quality and quantitative metrics.
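The core idea behind diffusion distillation, in general, is to train a fast student to reproduce in one step what a slow teacher produces over several denoising steps. The toy sketch below illustrates only that generic principle, not the paper's actual method: the "teacher" and "student" here are hypothetical linear denoisers, and the loss and training loop are simplified assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_step(x, t):
    # Toy "teacher" denoiser: shrinks the noisy sample by a
    # timestep-dependent factor (a stand-in for a real model).
    return x * (1.0 - 0.1 * t)

def teacher_two_steps(x, t):
    # The teacher denoises with two consecutive small steps.
    return teacher_step(teacher_step(x, t), t - 0.5)

def student_step(x, w):
    # Hypothetical linear student: a single step with one scalar weight.
    return x * w

def distill_loss(w, x, t):
    # Distillation objective: the student's single step should match
    # the teacher's two-step output (mean squared error).
    target = teacher_two_steps(x, t)
    pred = student_step(x, w)
    return float(np.mean((pred - target) ** 2))

# Gradient-free "training": sweep candidate weights, keep the best.
x = rng.standard_normal(16)  # a batch of noisy samples
t = 1.0
best_w = min(np.linspace(0.5, 1.0, 501), key=lambda w: distill_loss(w, x, t))
```

With this toy teacher, two steps at t = 1.0 scale the input by 0.9 × 0.95 = 0.855, so the sweep recovers a student weight near 0.855: the student now does in one step what the teacher needed two for, which is the essence of the speedup distillation buys.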