Orthogonal Finetuning (OFT)
OFT effectively stabilizes text-to-image diffusion models during fine-tuning
•Image•Text-to-Image Generation•Image Synthesis
The paper 'Controlling Text-to-Image Diffusion by Orthogonal Finetuning' explores how to effectively guide and control powerful text-to-image generation models for downstream tasks. It proposes orthogonal finetuning (OFT), a method that adapts the model while maintaining its generative ability: by rotating neuron weights with an orthogonal transform, OFT preserves the hyperspherical energy between neurons and thereby prevents the model from collapsing during fine-tuning. The authors consider two important fine-tuning tasks, subject-driven generation and controllable generation, and report that OFT outperforms existing methods in both generation quality and convergence speed.
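To make the mechanism concrete, here is a minimal PyTorch sketch of the core idea, applied to a single frozen linear layer. This is an illustrative assumption, not the authors' reference implementation (the paper applies block-diagonal orthogonal matrices to the attention layers of a diffusion model for efficiency): the pretrained weight stays frozen, and only a skew-symmetric matrix is learned, from which an orthogonal rotation is built via the Cayley transform.

```python
import torch
import torch.nn as nn


class OFTLinear(nn.Module):
    """Minimal sketch of orthogonal finetuning for one frozen linear layer.

    The pretrained weight W is frozen; only a skew-symmetric matrix Q is
    learned. The Cayley transform R = (I - Q)^{-1}(I + Q) is orthogonal,
    and the effective weight R @ W preserves pairwise angles between
    neurons (the hyperspherical energy the paper aims to keep intact).
    """

    def __init__(self, pretrained_linear: nn.Linear):
        super().__init__()
        self.linear = pretrained_linear
        for p in self.linear.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        d = pretrained_linear.out_features
        # Zero init => Q = 0 => R = I, so training starts from the original model.
        self.q_params = nn.Parameter(torch.zeros(d, d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = self.q_params.shape[0]
        # Skew-symmetrize so the Cayley transform yields an orthogonal matrix.
        q = self.q_params - self.q_params.T
        eye = torch.eye(d, dtype=q.dtype, device=q.device)
        r = torch.linalg.solve(eye - q, eye + q)  # R = (I - Q)^{-1}(I + Q)
        w = r @ self.linear.weight               # rotate the frozen weight
        return nn.functional.linear(x, w, self.linear.bias)


# Hypothetical usage: wrap a projection layer and train only q_params.
layer = OFTLinear(nn.Linear(320, 320))
out = layer(torch.randn(2, 320))
```

In the paper itself, the orthogonal matrix is block-diagonal (and optionally constrained to stay near identity) to keep the number of trainable parameters small; the sketch above uses a full matrix only for readability.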
Orthogonal Finetuning (OFT) Alternatives

FLUX.1-dev — A text-to-image generation model with 12 billion parameters
•Image Generation•AI Art
606

Meissonic — High-resolution text-to-image synthesis model
•Text-to-Image Synthesis•High-Resolution
252