Google confirmed on December 2nd that its "AI mode" for mobile search has been fully rolled out globally. Users viewing AI Overviews can now interact with the Gemini model directly through an input box at the bottom of the results page, asking instant follow-up questions and carrying on multi-turn conversations without leaving the page, collapsing the multiple jumps of a traditional search into "one scroll, one click."
On the technical side, the AI mode relies on a "query expansion" mechanism: the system first breaks the user's question down into multiple sub-topics, concurrently queries the knowledge graph along with real-time sports, finance, and shopping data, and then synthesizes a complete answer with citations. If the user keeps asking, the model gives personalized responses based on the conversation context and search history, and conversations run up to three times the length of traditional searches [180]. The feature is currently available in nearly 120 countries and regions and supports text, voice, and image input on both iOS and Android.
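To make that fan-out pattern concrete, the Python sketch below mimics the described flow: decompose a question into sub-queries, fetch the verticals concurrently, then merge the partial results while keeping their citations. This is an illustration of the concurrency pattern only, not Google's implementation; the function names (`expand_query`, `fetch_vertical`, `answer`) and the stubbed decomposition and synthesis steps are hypothetical stand-ins for the model calls and vertical indexes described above.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class SourcedAnswer:
    text: str
    citations: list[str]


def expand_query(question: str) -> list[str]:
    # Hypothetical decomposition into sub-topics; a real system would
    # derive these with a model rather than fixed templates.
    return [
        f"{question} overview",        # knowledge-graph style lookup
        f"{question} latest results",  # real-time sports data
        f"{question} market data",     # finance data
        f"{question} where to buy",    # shopping data
    ]


async def fetch_vertical(sub_query: str) -> SourcedAnswer:
    # Placeholder for a call to one vertical index; the sleep simulates
    # network latency so the concurrent fan-out is visible.
    await asyncio.sleep(0.1)
    return SourcedAnswer(
        text=f"result for: {sub_query}",
        citations=[f"https://example.com/{hash(sub_query) % 1000}"],
    )


async def answer(question: str) -> SourcedAnswer:
    # Fan out all sub-queries concurrently, then synthesize one answer.
    partials = await asyncio.gather(
        *(fetch_vertical(q) for q in expand_query(question))
    )
    combined = " ".join(p.text for p in partials)          # stand-in for LLM synthesis
    sources = [c for p in partials for c in p.citations]   # keep citations attached
    return SourcedAnswer(text=combined, citations=sources)


if __name__ == "__main__":
    result = asyncio.run(answer("Lakers game tonight"))
    print(result.text)
    print("Sources:", result.citations)
```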
Google Search Central emphasized that the AI mode continues to use the existing quality ranking system, and pages cited by the model are still counted as traffic in Search Console. Early data shows that the click-through rate of result pages carrying AI Overviews has dropped by about 36%, but dwell time and conversion quality have improved significantly. For content creators, Google recommends continuing to follow basic SEO practices such as structured data and internal linking; no additional optimization is needed for pages to be "selected" and cited by the AI mode.
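As a minimal sketch of the structured-data part of that guidance, the snippet below builds a schema.org Article object and serializes it as JSON-LD, the markup format Google's documentation recommends embedding in a page's `<head>`. All field values here are illustrative placeholders, not a real page.

```python
import json

# schema.org Article markup expressed as a Python dict; the values are
# placeholders standing in for a real page's metadata.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2024-12-02",
    "author": {"@type": "Person", "name": "Example Author"},
}

# Serialize as a JSON-LD <script> block ready to drop into the page head.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_markup, indent=2)
    + "\n</script>"
)
print(snippet)
```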
Industry analysts note that the built-in AI conversation moves Google Search from "information retrieval" toward an "intelligent assistant," putting it in direct competition with ChatGPT Search and Perplexity; if booking and ticketing agent functions are added in the future, the commercial flow of mobile search could be rewritten entirely.