The competition for computing power among global tech giants is reaching fever pitch. According to people familiar with the matter, Alphabet's Google is in in-depth discussions with semiconductor solutions provider Marvell Technology. The two companies plan to jointly develop two new custom AI chips, a move seen as a key step in Google's effort to reduce its reliance on Nvidia hardware and strengthen its cloud infrastructure moat.


Key Strategy: The "Golden Partner" of TPU and the New Generation Processor

The collaboration centers on two application-specific integrated circuits (ASICs). One is a new memory processing unit (MPU), whose core mission is to work closely with Google's in-house tensor processing unit (TPU) to optimize data transfer efficiency and break through bottlenecks in large-scale model training. The other is a next-generation TPU deeply optimized for running advanced AI models.

According to the current schedule, the two companies could complete the design of the memory processing unit as early as next year and then move into trial production. If all goes smoothly, this in-house combination will significantly enhance Google's autonomy in handling complex computing workloads.

Computing Independence: Reshaping the Cloud Business Competitive Landscape

For years, Nvidia's GPUs have dominated the AI computing market, bringing with them high procurement costs and potential supply-chain risks. For Google, continuing to iterate on the TPU and positioning it as a strong alternative to Nvidia's products is not only a technical breakthrough but also an inevitable business-strategy choice.