COSINE
[NAACL 2021] This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach".
Topics: agnews, contrastive-learning, dataset, fine-tuning, language-model, learning-with-noisy-labels, pseudo-labeling, roberta, roberta-model, self-training
Created: 2020-10-15T15:07:05
Updated: 2025-01-23T17:22:53
Paper: https://arxiv.org/pdf/2010.07835.pdf
Stars: 204
Stars Increase: 0