Apple has officially gained broad access to Google's Gemini model, aiming to accelerate the development of its lightweight on-device artificial intelligence through data distillation.

According to reports, Apple now has full access to the Gemini model within its data centers. The core of this move is to use the high-quality answers and reasoning-chain records generated by Gemini as training data to "feed" Apple's self-developed small models. In this "model distillation" approach, a large model guides the training of a smaller one, allowing the lightweight version to retain efficient computation while approaching the reasoning capabilities of top-tier large models.
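The "model distillation" described above can be sketched in code: the teacher's softened output distribution serves as a training target for the student, and the training loss measures how far the student's distribution is from the teacher's. Everything below (function names, logit values, the temperature setting) is an illustrative assumption, not a detail of Apple's or Google's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature softens the
    distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.
    Minimizing this pushes the student to mimic the teacher's behavior."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

if __name__ == "__main__":
    teacher = [4.0, 1.0, 0.2]        # hypothetical large-model logits
    close_student = [3.5, 1.2, 0.3]  # student that tracks the teacher
    far_student = [0.1, 3.0, 2.0]    # student that disagrees with it
    # The mimicking student incurs a much smaller distillation loss.
    print(distillation_loss(teacher, close_student))
    print(distillation_loss(teacher, far_student))
```

In a real training loop this loss is backpropagated through the student only; the teacher's outputs are fixed targets, which is why access to Gemini's responses is valuable as training data.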


Although Gemini was originally designed for chatbots and enterprise applications, and its product logic differs from Apple's deeper plans for Siri, this collaboration fills a significant gap in Apple's access to high-quality synthetic data. At the same time, Apple has not abandoned its in-house research: its Apple Foundation Models team continues to develop underlying models in parallel. The next-generation AI features built on this distillation work are expected to be showcased at the June Worldwide Developers Conference (WWDC).

This collaboration marks a shift in the AI industry from competing on raw computational power to competing on more efficient training strategies. By "paying for data," Apple is absorbing the capabilities of top-tier models to strengthen its on-device edge. This reflects the ongoing competition and balance between general-purpose large models and private on-device AI among tech giants, and it suggests that future devices will gain stronger local reasoning and complex-task capabilities, further driving the popularization of AI.