peft-fine-tuning-LoRA
A demonstration of Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on transformer-based language models. This project provides a minimal example of fine-tuning a pre-trained model on the Shakespeare dataset using Hugging Face’s transformers, datasets, and peft libraries.
Created: 2025-05-27T18:24:42
Updated: 2025-06-01T21:55:37
https://github.com/Rishi-Kora/peft-fine-tuning-LoRA
Stars: 2
Stars increase: 0
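
The repository description above outlines LoRA fine-tuning with the peft, transformers, and datasets libraries but does not reproduce any code. The sketch below shows what such a minimal LoRA fine-tune typically looks like; the base checkpoint ("gpt2"), the dataset identifier ("tiny_shakespeare"), and all hyperparameters are illustrative assumptions, not values taken from this repository.

```python
# Minimal LoRA fine-tuning sketch (illustrative; model, dataset id, and
# hyperparameters are assumptions, not taken from the repository).
from datasets import Dataset, load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "gpt2"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with LoRA adapters; only the low-rank matrices are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update
    lora_alpha=16,   # scaling factor applied to the update
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Assumed Shakespeare source; the split is one long string, so break it into
# non-empty lines to give the Trainer many short examples.
raw = load_dataset("tiny_shakespeare", split="train")
lines = [line for line in raw[0]["text"].splitlines() if line.strip()]
dataset = Dataset.from_dict({"text": lines})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-shakespeare",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Saves only the small adapter weights, not the full base model.
model.save_pretrained("lora-shakespeare-adapter")
```

The saved adapter can later be reattached to the same base checkpoint with `PeftModel.from_pretrained`, which is what makes the approach parameter-efficient: only the adapter weights need to be stored and shared.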