BLIP-Hugging-Face-Quickstart-Finetune-Lora
A modular, easy-to-use framework for fine-tuning BLIP-1 on custom image-captioning tasks using LoRA and Hugging Face Transformers. It includes data preprocessing, training scripts, and inference demos, with custom patching of the vision backbone. Ideal for researchers, engineers, and AI enthusiasts building lightweight captioning systems.