Google recently launched a major update to its AI assistant, Gemini. When users ask questions involving spatial structures or physical laws, Gemini is no longer limited to text and images: it can generate interactive 3D models and dynamic simulations that users can rotate and scale.

This new feature lets users grasp complex concepts through intuitive visualizations. For example, users can ask Gemini to demonstrate "the moon's orbit around the Earth" or "a double pendulum system." The system generates a 3D scene, and users can adjust variables such as motion speed and gravity with sliders, observing changes in the physical process in real time.
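To make concrete what "adjusting gravity with a slider" means for such a simulation, here is a minimal sketch of the kind of computation that could sit behind it: a simple pendulum integrated with the semi-implicit Euler method, where gravity `g` is an adjustable parameter. This is an illustrative assumption about how such a feature might work internally, not Gemini's actual implementation; the function name and parameters are hypothetical.

```python
import math

def simulate_pendulum(theta0, g=9.81, length=1.0, dt=0.001, steps=5000):
    """Integrate a simple pendulum with semi-implicit (symplectic) Euler.

    theta0 : initial angle in radians
    g      : gravitational acceleration -- the "slider" parameter
    Returns the list of angles over time.
    """
    theta, omega = theta0, 0.0
    angles = []
    for _ in range(steps):
        # Update velocity first, then position (keeps energy stable)
        omega += -(g / length) * math.sin(theta) * dt
        theta += omega * dt
        angles.append(theta)
    return angles

def first_zero_crossing(angles):
    """Index of the first step where the pendulum passes through vertical."""
    return next(i for i, a in enumerate(angles) if a <= 0.0)
```

Lowering `g` (e.g. to the Moon's 1.62 m/s²) lengthens the pendulum's period, which is exactly the kind of cause-and-effect relationship an interactive slider makes visible.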


3D Interaction and Parameter Control

Unlike earlier flat interactions, this upgrade offers a deeper sense of involvement:

  • Multi-dimensional Rotation: Supports 360-degree observation of 3D model details without blind spots.

  • Real-time Variables: Users can manually control simulation parameters via switches or sliders to explore physical results under different conditions.

  • Intuitive Presentation: Complex concepts (such as the Doppler effect) can be presented intuitively through dynamic 3D waveforms, greatly reducing the learning barrier.
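The Doppler effect mentioned above is a good example of a concept that benefits from visualization: the underlying relationship is a single formula, but the compression and stretching of waves is hard to picture from it alone. A minimal sketch of that formula, for a source moving along the line of sight of a stationary observer (assumed speed of sound 343 m/s; function name is illustrative):

```python
def doppler_frequency(f_source, v_source, v_sound=343.0):
    """Observed frequency for a source moving along the observer's line of sight.

    f_source : emitted frequency in Hz
    v_source : source speed in m/s (positive = approaching, negative = receding)
    """
    return f_source * v_sound / (v_sound - v_source)
```

An approaching 440 Hz source at 30 m/s is heard above 440 Hz; the same source receding is heard below it, and a 3D waveform animation shows the same wavefronts bunching up in front of the source and spreading out behind it.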

Currently, all Gemini Pro users can try this feature: after asking a question, click the "Show me the visualization" button at the bottom of the interface to activate it. The move also marks a new phase in the competition among Google, Anthropic, and OpenAI in the "visualized answers" field, as AI evolves from pure text interaction toward multimodal, highly interactive spatial intelligence.