At the "Zhuijin Jinling · AI Open Source Talent Summit and Moba Community Developer Conference" held on March 22, the Moba Community, together with authoritative institutions including the CCF Intelligent Robotics Committee and the Ministry of Industry and Information Technology's Equipment Digital Twin Technology Key Laboratory, officially released the EAI-100 (Embodied Artificial Intelligence 100), a list of 100 representative achievements and figures in embodied intelligence for 2025. Ant Lingbo Technology was named to two of its core lists: "Top 10 Breakthroughs of the Year" and "Pioneer Figures 20".
The EAI-100 is a systematic annual evaluation of the embodied intelligence field, with real impact, long-term value, and directional contribution as its core criteria. It focuses on whether nominated individuals and achievements have driven substantial progress in embodied intelligence at the level of research paradigms, system capabilities, or industrial practice. This year's list also includes representatives from universities such as Tsinghua University, Peking University, and the University of Hong Kong, as well as companies such as Yuzhu Technology, Galaxy General, Xinghai Map, and Zhiyuan Robotics.

(Caption: Shen Yujun of Ant Lingbo Technology was named to the EAI-100 Pioneer Figures list, alongside Wang Xingxing of Yuzhu Technology and Wang He of Peking University)
LingBot-VLA: A "General Brain" for Embodied Intelligence Across Modalities and Tasks
Ant Lingbo Technology's self-developed embodied foundation model LingBot-VLA was named to the "Top 10 Breakthroughs of the Year." The model achieves cross-modal, cross-task generalization in real-robot manipulation scenarios, significantly reduces post-training costs, and advances the engineering deployment of "one brain for multiple machines."
LingBot-VLA is pre-trained on more than 20,000 hours of real-world manipulation data covering nine mainstream dual-arm robot configurations. Working in tandem with Ant Lingbo's self-developed high-precision spatial perception model LingBot-Depth, it further improves manipulation accuracy. The model needs only 80 demonstrations to complete high-quality task transfer, and with deep optimization of the underlying code library, its training throughput reaches 1.5 to 2.8 times that of mainstream frameworks, cutting both data and compute costs. Ant Lingbo has open-sourced LingBot-VLA together with its post-training toolchain, enabling developers to quickly adapt the model to their own scenarios and greatly improving its practical usability.
Chief Scientist Shen Yujun Named to the "Pioneer Figures 20"
Ant Lingbo Technology's Chief Scientist Shen Yujun was also named to the "Embodied Intelligence Pioneer Figures 20" list. The list honors individuals who have had a sustained and profound impact on embodied intelligence, recognizing in particular their role as "pathfinders" during key stages of the field's development.
Dr. Shen Yujun graduated from the Chinese University of Hong Kong and has long focused on research in computer vision and generative models, publishing more than 100 papers in top international conferences and journals such as CVPR and TPAMI, with over 10,000 citations. As Chief Scientist of Ant Lingbo Technology, he led his team to release and open-source the spatial perception model LingBot-Depth, the foundation model LingBot-VLA, the world model LingBot-World, and the video-action model LingBot-VA in January of this year, building a complete technical matrix spanning spatial perception to intelligent decision-making and foundational capabilities to world modeling, and driving embodied intelligence from the laboratory toward large-scale application.
