At the Beijing Zhiyuan Conference 2025, held today, the Beijing Zhiyuan Artificial Intelligence Research Institute officially released the "WuJie" series of large models, showcasing its latest research results and strategic roadmap toward physical artificial general intelligence (AGI).
The "WuJie" series aims to break through the boundary between the virtual and physical worlds and bring AI capabilities to bear on the physical world. It comprises four frontier models: the multimodal world model Emu3, the neuroscience model JianWei Brainμ, the embodied intelligence brain RoboBrain 2.0, and the microscopic life model OpenComplex2, forming a full-chain AI system that spans world understanding, neural modeling, embodied control, and life simulation.
Among them, Emu3 is an original multimodal generative model that unifies the understanding and generation of text, images, and video in a single autoregressive framework, without relying on diffusion architectures, and provides modality-agnostic unified representations. Its core innovation is encoding multimodal data into homogeneous token sequences, giving it strong cross-modal fusion capability.
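The unified-token idea behind this design can be sketched in a few lines. This is an illustrative assumption, not Emu3's actual implementation: the real model uses learned tokenizers and far larger vocabularies, but the principle is the same, as different modalities are mapped into one shared discrete vocabulary so that a single autoregressive model predicts the next token regardless of modality.

```python
# Hypothetical sketch of unified multimodal tokenization (assumption:
# real tokenizers, vocab sizes, and special tokens differ; illustrative only).

TEXT_VOCAB = 256          # e.g. byte-level text tokens: ids 0..255
IMG_OFFSET = TEXT_VOCAB   # image codebook ids are shifted past the text range

def tokenize_text(s: str) -> list[int]:
    """Byte-level text tokens occupy the low id range."""
    return list(s.encode("utf-8"))

def tokenize_image(patch_codes: list[int]) -> list[int]:
    """Visual codebook indices are offset into the shared vocabulary."""
    return [IMG_OFFSET + c for c in patch_codes]

def build_sequence(caption: str, patch_codes: list[int]) -> list[int]:
    # One homogeneous token stream: beyond the id ranges themselves,
    # the model sees no modality boundary.
    return tokenize_text(caption) + tokenize_image(patch_codes)

seq = build_sequence("a cat", [3, 17, 511])
# Autoregressive training pairs: predict token i from tokens 0..i-1.
pairs = [(seq[:i], seq[i]) for i in range(1, len(seq))]
```

Because the sequence is homogeneous, the same next-token objective covers text-to-image, image-to-text, and interleaved generation without modality-specific heads.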
Building on the Emu3 architecture, JianWei Brainμ is the first model to tokenize and multimodally align neuroscience signals such as fMRI, EEG, and two-photon imaging. Pre-trained on more than 1 million neural signal units, it has been called the "AlphaFold" of neuroscience, with broad potential in basic neuroscience research, diagnosis and treatment of brain diseases, brain-computer interfaces, and more.
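Tokenizing a continuous neural signal means discretizing it into symbols a sequence model can consume. The sketch below is a stand-in assumption, not Brainμ's method: the real model presumably learns its codebook, whereas uniform amplitude binning is used here only to make the idea concrete for a single EEG-like channel.

```python
# Hypothetical illustration of discretizing a continuous signal (e.g. one
# EEG channel) into tokens. Assumption: uniform amplitude binning stands in
# for whatever learned quantization the actual model uses.

def quantize(signal, lo=-1.0, hi=1.0, n_bins=16):
    """Map each sample to an integer bin id in [0, n_bins - 1]."""
    tokens = []
    for x in signal:
        x = min(max(x, lo), hi)          # clip out-of-range samples
        frac = (x - lo) / (hi - lo)      # normalize to [0, 1]
        tokens.append(min(int(frac * n_bins), n_bins - 1))
    return tokens

eeg = [0.0, 0.5, -0.9, 1.0]
tokens = quantize(eeg)  # each sample becomes one discrete token id
```

Once signals are token sequences, they can share a vocabulary and an autoregressive objective with text and image tokens, which is what makes cross-modal alignment possible.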
Zhiyuan has already partnered with Peking University, Tsinghua University, Fudan University, the Beijing Institute of Life Sciences, and BrainCo to bring the "WuJie" models into scientific research and industry, further strengthening China's global competitiveness on the path to physical AGI.