gemma-Instruct-2b-Finetuning-on-alpaca
This project demonstrates the steps required to fine-tune the Gemma 2B Instruct model on the Alpaca dataset for tasks like code generation. We use QLoRA quantization to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.
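The sketch below illustrates the setup described above: loading Gemma 2B Instruct in 4-bit NF4 (QLoRA) to cut memory, attaching LoRA adapters, and running SFTTrainer on an Alpaca-style dataset. It is a minimal example under stated assumptions, not the repository's exact script: the checkpoint name "google/gemma-2b-it", the "tatsu-lab/alpaca" dataset, the LoRA ranks/targets, and the hyperparameters are illustrative choices, and the SFTTrainer keyword arguments assume a trl release in the 0.7-0.8 range (newer trl versions move some of these into SFTConfig).

```python
# Minimal QLoRA + SFTTrainer sketch (assumes transformers, peft, trl, bitsandbytes, datasets
# are installed and access to the gated Gemma checkpoint has been granted).
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig
from trl import SFTTrainer

model_id = "google/gemma-2b-it"  # assumed checkpoint; the repo may use another Gemma variant

# 4-bit NF4 quantization (the "Q" in QLoRA) so the 2B model fits in modest GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections; rank and targets are illustrative
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# The tatsu-lab/alpaca dataset ships a pre-formatted "text" column combining
# instruction, input, and output into a single prompt string
dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="gemma-2b-alpaca-qlora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
        optim="paged_adamw_8bit",  # paged optimizer commonly paired with QLoRA
    ),
)

trainer.train()
```

Because only the low-rank adapter weights are trained while the base model stays frozen in 4-bit, this recipe keeps peak memory low enough for a single consumer GPU.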
Created: 2024-06-30T14:59:33
Updated: 2024-07-03T13:27:42
Stars: 0
Stars increase: 0