
Deep-Learning-ECE-7123-2025-Spring-Project-2


This repository contains the code, model configurations, and report for fine-tuning a roberta-base model with Low-Rank Adaptation (LoRA), a Parameter-Efficient Fine-Tuning (PEFT) method, on the AG News classification task. The goal was to achieve high test accuracy while keeping the number of trainable parameters under 1 million.
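Below is a minimal sketch of this kind of setup, assuming the Hugging Face transformers, peft, and datasets libraries. The hyperparameters (rank, alpha, target modules, learning rate) are illustrative assumptions, not the repository's actual configuration; they are chosen only to show how the adapter-plus-classifier parameter count stays under 1 million for roberta-base.

```python
# Sketch: LoRA fine-tuning of roberta-base on AG News (4 classes).
# Hyperparameters here are illustrative, not the repo's real settings.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=4)  # AG News has 4 topic labels

# Inject low-rank adapters into the attention query/value projections;
# only the adapters and the classification head remain trainable.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
    modules_to_save=["classifier"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports well under 1M trainable params

dataset = load_dataset("ag_news")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-roberta-agnews",
        per_device_train_batch_size=32,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```

With rank 8 adapters on the query and value projections of roberta-base's 12 layers plus the classification head, the trainable-parameter count lands around 0.9M, which is one way the under-1-million constraint can be met; the repository's own configuration may differ.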
