At the 2025 I/O developer conference, Google announced that its latest feature, Gemini Live, is now available to both iOS and Android users. The feature can recognize and respond in real time to what users' phone cameras and screens are showing, further enhancing the human-computer interaction experience.

Initially, Gemini Live was available only to Gemini Advanced subscribers, but Google announced in April this year that it planned to expand access. The feature has now arrived on iOS and is open to all users. Simply share a screenshot or point the camera at an object, and Gemini Live will respond with relevant information. This interaction model moves beyond the limits of traditional text input, making the AI feel more capable of genuinely understanding user needs.


Imagine visiting an aquarium: open your phone's camera, and Gemini Live can instantly identify unfamiliar underwater creatures and share detailed information about them. Scenarios like this not only enrich the user experience but also bring more fun and knowledge into daily life.

As AI technology continues to advance, the introduction of Gemini Live may change the way we interact with our devices. Whether searching for information, learning something new, or solving everyday problems, users stand to benefit from this kind of AI tool.

This update not only underscores Google's leading position in artificial intelligence but also brings a new experience to iPhone users. As the technology matures, Gemini Live is likely to play a growing role in daily life, helping users interact with the world around them.