
peft-fine-tuning-LoRA

Public

A demonstration of Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on transformer-based language models. This project provides a minimal example of fine-tuning a pre-trained model on the Shakespeare dataset using Hugging Face’s transformers, datasets, and peft libraries.
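The repository itself uses Hugging Face's peft library; as a minimal toy sketch (not the repo's actual code), the core LoRA idea can be shown with plain NumPy: the frozen weight matrix W is augmented with a trainable low-rank update B @ A, scaled by alpha / r. All dimensions and hyperparameter values below are illustrative assumptions.

```python
import numpy as np

# Toy LoRA sketch (hypothetical, not the repo's code): instead of updating
# the full weight matrix W, LoRA learns a low-rank update B @ A with
# rank r << min(d_out, d_in).
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4            # layer dims and LoRA rank (assumed values)
alpha = 8                             # LoRA scaling hyperparameter (assumed)

W = rng.normal(size=(d_out, d_in))    # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, zero-initialised

def lora_forward(x):
    # Frozen path plus scaled low-rank update: (W + (alpha / r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0 the adapted layer matches the frozen layer exactly, which is
# how LoRA begins training without perturbing the pre-trained model.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: r * (d_in + d_out) trainable values vs d_in * d_out
# for full fine-tuning of this layer.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

In the actual project, the peft library applies this same decomposition to selected attention weight matrices of the pre-trained transformer while keeping the base model frozen.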

Created: 2025-05-27T18:24:42
Updated: 2025-06-01T21:55:37
https://github.com/Rishi-Kora/peft-fine-tuning-LoRA
Stars: 2
Stars increase: 0