Tech giant Meta, in collaboration with researchers from The Chinese University of Hong Kong, has introduced Multi-SpatialMLLM, a model that marks significant progress in spatial understanding for multimodal large language models (MLLMs). By integrating three key components (depth perception, visual correspondence, and dynamic perception), the model breaks through the limitations of single-frame image analysis and provides robust support for more complex visual tasks.


In recent years, as industries such as robotics and autonomous driving have demanded stronger spatial understanding, existing MLLMs have faced numerous challenges. Research shows that current models perform poorly on basic spatial reasoning tasks, such as accurately distinguishing left from right. This is attributed mainly to a lack of specialized training data and to traditional methods that rely on static, single-view input and cannot handle dynamic information.

To address these issues, Meta's FAIR team, in partnership with The Chinese University of Hong Kong, introduced the MultiSPA dataset. This dataset covers over 27 million samples across diverse 3D and 4D scenes, incorporating high-quality annotated data from Aria Digital Twin and Panoptic Studio, along with various task templates generated by GPT-4o.
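The article does not specify how MultiSPA samples are structured, but a minimal sketch can make the idea of multi-frame spatial question answering concrete. The sketch below is purely illustrative: the field names, file paths, and the example question template are assumptions, not details from the dataset itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MultiFrameSpatialSample:
    """Hypothetical structure for one multi-frame spatial QA sample."""
    frame_paths: List[str]   # two or more frames of the same scene
    task: str                # e.g. "depth_perception", "camera_movement"
    question: str            # templated natural-language question
    answer: str              # ground truth derived from 3D/4D annotations
    metadata: Dict[str, float] = field(default_factory=dict)  # e.g. pose deltas


# Example: a qualitative camera-movement question built from a template
sample = MultiFrameSpatialSample(
    frame_paths=["scene_001/frame_000.jpg", "scene_001/frame_015.jpg"],
    task="camera_movement",
    question="Between the first and second frame, did the camera move left or right?",
    answer="right",
    metadata={"dx": 0.12, "dy": 0.00, "dz": 0.03},  # illustrative translation, in meters
)
print(sample.question, "->", sample.answer)
```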

Additionally, the research team designed five training tasks, including depth perception, camera movement perception, and object size perception, to enhance Multi-SpatialMLLM's multi-frame spatial reasoning. In extensive testing on the MultiSPA benchmark, Multi-SpatialMLLM performed strongly, with an average improvement of 36% over baseline models. It reached 80-90% accuracy on qualitative tasks, well above the roughly 50% achieved by the baselines. Even on the challenging task of predicting camera movement vectors, it reached 18% accuracy.
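The article does not state how these accuracy figures are computed, so the following is only a plausible sketch: qualitative tasks scored by exact answer matching, and camera-movement vectors counted as correct when the prediction falls within a relative-error tolerance. The tolerance value and the matching rule are assumptions for illustration.

```python
import numpy as np


def qualitative_accuracy(predictions, answers):
    """Fraction of exact matches for categorical answers such as 'left' / 'right'."""
    matches = [p.strip().lower() == a.strip().lower() for p, a in zip(predictions, answers)]
    return sum(matches) / len(matches)


def vector_accuracy(pred_vecs, true_vecs, rel_tol=0.25):
    """Count a predicted camera-movement vector as correct when its error is small
    relative to the magnitude of the true movement (tolerance is an assumption)."""
    pred = np.asarray(pred_vecs, dtype=float)
    true = np.asarray(true_vecs, dtype=float)
    err = np.linalg.norm(pred - true, axis=1)
    scale = np.linalg.norm(true, axis=1) + 1e-8
    return float(np.mean(err / scale <= rel_tol))


# Illustrative numbers only
print(qualitative_accuracy(["left", "right", "right"], ["left", "right", "left"]))  # ~0.67
print(vector_accuracy([[0.10, 0.0, 0.0]], [[0.12, 0.0, 0.02]]))                     # 1.0
```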

On the BLINK benchmark, Multi-SpatialMLLM achieved nearly 90% accuracy, a 26.4% improvement that outperforms several proprietary systems. In standard visual question-answering (VQA) tests, the model maintained its original performance, demonstrating strong general capabilities without overfitting to spatial reasoning tasks.

Key Points:

🌟 The Multi-SpatialMLLM model, developed by Meta with The Chinese University of Hong Kong, significantly enhances the spatial understanding capability of multimodal large language models.

📊 The new model overcomes the limitations of single-frame image analysis by integrating three key components: depth perception, visual correspondence, and dynamic perception.

🏆 Multi-SpatialMLLM excels across multiple benchmarks, achieving significant accuracy improvements over baseline and proprietary models.