
Google-Competition-Gemma-2

Public

Fine-tuning of the Gemma 2 model for the Google competition using a dataset of Chinese poetry. The goal is to adapt the model to generate Chinese poetry in a classical style by training it on a subset of poems. The fine-tuning process uses LoRA (Low-Rank Adaptation) for parameter-efficient adaptation.
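
Below is a minimal sketch of what such a LoRA fine-tuning run could look like with Hugging Face `transformers` and `peft`. The base checkpoint name (`google/gemma-2-2b`), the data file `poems.jsonl`, and all hyperparameters are illustrative assumptions, not the exact configuration used in this repository.

```python
# Sketch of LoRA fine-tuning for Gemma 2 on classical Chinese poetry.
# Model name, dataset path, and hyperparameters are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

MODEL_NAME = "google/gemma-2-2b"   # assumed base checkpoint
DATA_FILE = "poems.jsonl"          # hypothetical file with one poem per line in a "text" field

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# LoRA trains small low-rank adapter matrices on the attention projections
# instead of updating the full weights, which keeps memory use low.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Tokenize the poems for causal language modeling.
dataset = load_dataset("json", data_files=DATA_FILE, split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma2-poetry-lora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma2-poetry-lora")  # saves only the adapter weights
```

Because only the adapter weights are saved, the resulting artifact is small and can be loaded on top of the original Gemma 2 checkpoint at inference time.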

Created: 2025-02-12T06:55:29
Updated: 2025-06-17T20:42:16
Stars: 3
Stars increase: 0