Multilingual-PR
Phoneme recognition using the pre-trained models Wav2vec2, HuBERT, and WavLM. In this project we compare three self-supervised models, wav2vec / wav2vec 2.0 (2019, 2020), HuBERT (2021), and WavLM (2022), all pre-trained on a corpus of English speech, and use them in various ways to perform phoneme recognition in different languages with a network trained using the Connectionist Temporal Classification (CTC) algorithm.
Topics: asr, common-voice, deep-learning, huggingface, huggingface-transformers, phone-recognition, self-supervised-learning, speech-processing, speech-recognition, wandb
Created: 2022-02-25T23:27:28
Updated: 2025-03-24T20:09:40
Stars: 238