
gemma-Instruct-2b-Finetuning-on-alpaca


This project demonstrates the steps required to fine-tune the Gemma Instruct 2B model on the Alpaca dataset for tasks such as instruction following and code generation. We use QLoRA quantization to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.

Created: 2024-06-30T14:59:33
Updated: 2024-07-03T13:27:42
