Weibo has officially released VibeThinker, a self-developed open-source large model. Despite having only 1.5 billion parameters, it outperformed the 671-billion-parameter DeepSeek R1 on top-tier international mathematical competition benchmarks, leading in accuracy, and a single post-training run costs just $7,800, dozens of times less than comparable runs for models such as DeepSeek-R1 and MiniMax-M1.

VibeThinker uses a lightweight MoE architecture with multi-round knowledge distillation. According to the official statement, it can be efficiently fine-tuned on less than 5 GB of mathematical corpora, supports one-click download from Hugging Face, and is licensed for commercial use. Weibo's technical team revealed that the model's average score on competition question banks such as AIME 2025 and HMMT improved by 3.4% over R1, while inference latency dropped by 42%, making it suitable for real-time scenarios in education and finance.
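As a rough illustration of the one-click download path, here is a minimal Python sketch using the Hugging Face transformers library. The repo id "WeiboAI/VibeThinker-1.5B", the prompt, and the generation settings are assumptions for illustration, not details confirmed in the announcement; check the model card for the actual id and recommended parameters.

```python
# Minimal sketch: pulling the open-source checkpoint from Hugging Face
# and running a single math prompt. Repo id is assumed, not confirmed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "WeiboAI/VibeThinker-1.5B"  # assumed repo id; verify on the model card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 1.5B model compact
    device_map="auto",           # place weights on whatever GPU is available
)

prompt = "Find all integer solutions of x^2 - 5x + 6 = 0."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```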

The open-source release provides both PyTorch and GGUF formats and can run on a single RTX 4090. Weibo has also opened up its training scripts and data-mixing recipes, plans to launch VibeThinker-Math, a specialized math-enhanced version, in December, and will co-host a "Lightweight Mathematics Challenge" with universities to promote the adoption of low-cost, high-accuracy AI.
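To illustrate the single-GPU claim, the sketch below loads a GGUF build with the llama-cpp-python bindings, a common way to run GGUF files; the announcement names only the format, so the file name and quantization level here are placeholders, not published artifact names.

```python
# Minimal sketch: running a GGUF build on one consumer GPU via
# llama-cpp-python. The file name below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="vibethinker-1.5b-q4_k_m.gguf",  # placeholder local file
    n_gpu_layers=-1,  # offload all layers; a 1.5B model fits easily in 24 GB
    n_ctx=8192,       # room for long chain-of-thought style answers
)

out = llm(
    "Q: What is the sum of the first 100 positive integers?\nA:",
    max_tokens=256,
    temperature=0.6,
)
print(out["choices"][0]["text"])
```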