Recently, Xiaohongshu's hi lab team officially released dots.llm1, the company's first open-source large language model. The new model has drawn wide industry attention for its strong performance and large parameter count.
dots.llm1 is a large-scale Mixture of Experts (MoE) language model with 142 billion total parameters, of which 14 billion are activated per token. After pre-training on 11.2 trillion tokens of high-quality data, the model's performance rivals Alibaba's Qwen2.5-72B. In practice, this means dots.llm1 generates accurate, fluent text while also handling more complex natural language processing tasks.
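To make the "142 billion total, 14 billion activated" distinction concrete, here is a minimal, illustrative sketch of top-k MoE routing in PyTorch. All sizes (d_model, n_experts, top_k) are toy values chosen for readability, not dots.llm1's actual configuration: in an MoE layer, a router selects only a few experts per token, so most of the model's parameters sit idle on any given forward pass.

```python
# A minimal sketch of top-k Mixture-of-Experts routing, illustrating why only a
# fraction of a MoE model's parameters are active per token. Sizes here are toy
# values for illustration, not dots.llm1's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        logits = self.router(x)                             # (n_tokens, n_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why activated parameters are far fewer than total parameters.
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

The same principle scales up: a model can hold very large total capacity while keeping per-token compute close to that of a much smaller dense model.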
Notably, the pre-training process used no synthetic data: the entire corpus consists of high-quality text drawn from real-world sources. This gives dots.llm1 an advantage in capturing the subtlety and naturalness of human language, providing users with a more realistic interactive experience.
Xiaohongshu's decision to open-source the model marks a further push into artificial intelligence and signals its ambitions in technological innovation. Open-sourcing invites community participation and contributions while giving developers more opportunities to explore and build on this powerful tool.
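For developers curious what getting started might look like, here is a hypothetical sketch using the Hugging Face transformers library. The repo id below is an assumption for illustration; consult the official dots.llm1 release for the actual model id and recommended usage.

```python
# A hypothetical getting-started sketch with Hugging Face transformers.
# The repo id is an assumption, not confirmed by this article; check the
# official dots.llm1 release page for the real model id and instructions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rednote-hilab/dots.llm1.inst"  # assumed repo id, verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # shard the model across available GPUs
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    trust_remote_code=True,  # custom MoE architectures often require this
)

inputs = tokenizer("Write a short product description for a travel mug.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```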
As a platform centered on content sharing and social interaction, Xiaohongshu has long worked to improve both user experience and its technical capabilities. With dots.llm1, it hopes to offer users more intelligent services while encouraging more developers to take part in AI research and practice.
Looking ahead, we can expect dots.llm1 to demonstrate its potential in more areas, such as content creation, intelligent customer service, and more complex dialogue systems. Without a doubt, Xiaohongshu is pushing the boundaries of artificial intelligence in its own way.