Apple is reportedly planning to integrate artificial intelligence deeply into the core of the iPhone camera in its upcoming iOS 27 release, folding visual intelligence features into a new "Siri Mode." According to insiders, Visual Intelligence, previously tied to the physical "Camera Control" button, will appear directly in the Camera app as its own mode, alongside "Photo," "Video," and "Portrait." The move is intended to raise the prominence of AI features and give users more direct access to the core capabilities of Apple Intelligence.

In terms of interaction design, the new mode will introduce a shutter button styled in Apple Intelligence's visual language, replacing the traditional white capture button. Functionally, the mode will not only support calling on ChatGPT for scene-based Q&A and running Google reverse image searches, but will also deepen the camera's real-world perception: extracting event details from posters to create calendar entries automatically, identifying plant and animal species, and quickly pulling up menus and contact information for businesses.
Additionally, Apple is said to be pushing the feature set further with in-house algorithms. iOS 27 is expected to add automatic scanning and logging of nutrition labels on food packaging, as well as direct recognition and saving of contact details from the camera view. These upgrades mark Apple's accelerating effort to turn multimodal AI capabilities into native system experiences. As in previous years, iOS 27 is expected to be unveiled at the Worldwide Developers Conference (WWDC) in June and released alongside new devices in the fall, a rollout seen as a key strategic move in Apple's bid to catch up with competitors in mobile AI.
