# peft-fine-tuning-LoRA
A demonstration of Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on transformer-based language models. This project provides a minimal example of fine-tuning a pre-trained model on the Shakespeare dataset with Hugging Face's `transformers`, `datasets`, and `peft` libraries.
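The core idea behind LoRA can be sketched without any of the libraries above: the frozen pre-trained weight matrix `W` is left untouched, and only a low-rank update `B @ A` is trained. The sketch below is a minimal NumPy illustration, not this project's implementation; the dimensions (768, matching GPT-2 small's hidden size), rank `r=8`, and scaling `alpha=16` are assumed values chosen as common defaults.

```python
import numpy as np

# LoRA: instead of updating the full weight W (d_out x d_in), train a
# low-rank update B @ A with rank r << min(d_out, d_in). W stays frozen.
d_in, d_out, r = 768, 768, 8   # assumed dims (GPT-2 small hidden size)
alpha = 16                     # scaling factor; update is (alpha / r) * B @ A

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))       # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable, small random init
B = np.zeros((d_out, r))                 # trainable, zero init

# Because B is zero at initialization, B @ A == 0 and the adapted
# layer behaves exactly like the base model at the start of training.
W_eff = W + (alpha / r) * (B @ A)
assert np.allclose(W_eff, W)

full_params = W.size               # parameters touched by full fine-tuning
lora_params = A.size + B.size      # parameters actually trained by LoRA
print(full_params)                 # 589824
print(lora_params)                 # 12288, about 2% of the full matrix
```

In the actual project, `peft`'s `LoraConfig` and `get_peft_model` wire this same factorization into the attention layers of the pre-trained transformer, so only the `A`/`B` factors receive gradients during fine-tuning.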