Unbabel
M-Prometheus is an open-source LLM judge model that natively supports multilingual output evaluation. It was trained on 480,000 multilingual direct-assessment and pairwise-comparison examples, each paired with detailed long-form feedback.
A 7-billion-parameter multilingual translation model based on the Mistral architecture, supporting translation-related tasks in 10 languages.
TowerInstruct-7B-v0.2 is a 7-billion-parameter multilingual instruction-tuned model developed by Unbabel, built on the Llama 2 architecture and focused on translation-related tasks in 10 languages.
TowerInstruct-13B is a 13-billion-parameter language model obtained by fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset, focusing on translation-related tasks.
TowerBase-13B is a multilingual large language model based on continued pre-training of Llama 2, supporting 10 languages and suitable for translation and related tasks.
TowerInstruct-7B is a language model obtained by fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset, specifically designed to handle various translation-related tasks.
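As a sketch of how the TowerInstruct models are typically prompted (following the standard Hugging Face `transformers` chat-pipeline pattern; the example sentence is illustrative, and the model download itself is left commented out so the prompt structure can be inspected on its own):

```python
# Sketch: prompting TowerInstruct via the Hugging Face transformers pipeline.
# Translation requests are phrased as chat messages naming the source and
# target languages, ending with an "English:" cue for the model to complete.
messages = [
    {
        "role": "user",
        "content": (
            "Translate the following text from Portuguese into English.\n"
            "Portuguese: Um grupo de investigadores lançou um novo modelo.\n"
            "English:"
        ),
    }
]

# With `pip install transformers torch` (downloads ~14 GB of weights):
# import torch
# from transformers import pipeline
# pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-7B-v0.2",
#                 torch_dtype=torch.bfloat16, device_map="auto")
# prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False,
#                                             add_generation_prompt=True)
# outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
# print(outputs[0]["generated_text"])
```

The chat template is applied rather than feeding raw text, since the instruct variants were fine-tuned on templated conversations.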
TowerBase-7B is a language model obtained by continuing the pretraining of Llama 2 on 20 billion tokens of monolingual and parallel data spanning ten languages, extending multilingual coverage while maintaining English capabilities.
COMET-22 is a machine translation evaluation model developed by Unbabel, based on the XLM-R architecture, supporting quality assessment for multiple language pairs.
COMETKiwi is a reference-free machine translation quality estimation model: it predicts a quality score from the source text and the translation alone, without requiring a reference translation.
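A minimal sketch of the input format the two evaluation models expect, via the `unbabel-comet` package (the example sentences are illustrative; the scoring calls are left commented since they download model checkpoints):

```python
# COMET-22 scores a translation against both the source and a human reference;
# COMETKiwi is reference-free, so its inputs simply omit the "ref" field.
comet22_data = [
    {
        "src": "O gato dorme no sofá.",            # source segment
        "mt": "The cat sleeps on the couch.",       # machine translation
        "ref": "The cat is sleeping on the sofa.",  # reference translation
    },
]
cometkiwi_data = [
    {k: v for k, v in ex.items() if k != "ref"} for ex in comet22_data
]

# With `pip install unbabel-comet` (checkpoint download not shown):
# from comet import download_model, load_from_checkpoint
# model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
# print(model.predict(comet22_data, batch_size=8).scores)
```

`predict` returns segment-level scores plus a corpus-level average, so the same call pattern serves both per-sentence and system-level evaluation.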
A grammar error correction model based on the T5-small architecture, designed to automatically detect and correct grammatical errors in English text.
mMiniLM-L12xH384 XLM-R is a lightweight multilingual pre-trained model obtained by compressing XLM-RoBERTa with MiniLMv2's relation-based self-attention distillation.