This paper introduces a diffusion model trained with a perceptual loss, incorporating the perceptual term directly into the diffusion training objective to improve sample quality. For conditional generation, the method improves sample quality without altering the conditioning input, so sample diversity is not sacrificed; for unconditional generation, it yields a similar quality improvement. The paper details the method's principles and experimental results.
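The core idea, adding a perceptual term on the model's reconstructed sample alongside the standard noise-prediction objective, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the denoiser and the frozen feature extractor are stand-in random linear maps, and `lam`, `phi`, and the loss weighting are hypothetical names chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's actual networks):
# - W_model is a fixed linear map playing the role of the noise-prediction net.
# - W_phi is a fixed projection standing in for a frozen perceptual
#   feature extractor whose feature-space distance defines the loss.
D, F = 16, 8
W_model = rng.normal(size=(D, D)) * 0.1
W_phi = rng.normal(size=(F, D)) * 0.1

def phi(x):
    """Frozen 'perceptual' feature map (a random projection in this toy)."""
    return W_phi @ x

def diffusion_losses(x0, abar_t, lam=0.1):
    """Losses for one training step of a noise-prediction diffusion model.

    abar_t is the cumulative noise-schedule coefficient alpha_bar_t.
    Returns (standard eps-MSE, combined loss with the perceptual term).
    """
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps   # forward q(x_t | x0)
    eps_hat = W_model @ x_t                                     # model's noise estimate
    # Invert the forward process to get the model's estimate of x0.
    x0_hat = (x_t - np.sqrt(1.0 - abar_t) * eps_hat) / np.sqrt(abar_t)
    mse = np.mean((eps_hat - eps) ** 2)
    perc = np.mean((phi(x0_hat) - phi(x0)) ** 2)                # perceptual distance
    return mse, mse + lam * perc

x0 = rng.normal(size=D)
mse, total = diffusion_losses(x0, abar_t=0.5)
```

Because the perceptual term is measured between `x0_hat` and `x0` in feature space, it shapes only the quality of the reconstruction; in a conditional setup the conditioning input would be passed to the denoiser unchanged.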