With the explosion of large AI models and increasingly complex algorithms, energy efficiency at the hardware level has become a prominent challenge. Researchers from the School of Artificial Intelligence at Peking University, led by Researcher Sun Zhong, have made a major breakthrough in the field of high-performance computing chips. The team successfully developed an analog computing chip specifically designed for Non-negative Matrix Factorization (NMF), providing a more efficient, low-power solution for processing massive data.


"Non-negative Matrix Factorization" is a core technology in fields such as image analysis, recommendation systems, and bioinformatics. However, traditional digital chips often face challenges such as high computational complexity and limited memory access when processing large-scale data in real time. To break through this dilemma, the Peking University team chose the simulation computing technology route, using physical laws to perform parallel computations directly, thus reducing latency and power consumption at the fundamental logic level.

Experimental test data show that the new chip performs remarkably well in typical application scenarios. Compared with currently mainstream advanced digital chips, its computing speed is improved by about 12 times, while its energy efficiency is improved by more than 228 times. This means the chip can complete far more work than traditional hardware at extremely low energy consumption.

The research was officially published on January 19 in the top international journal "Nature Communications". In practical tests, the chip not only maintained high precision in image compression tasks but also saved about half of the storage space; in training on commercial recommendation-system datasets, its performance was likewise significantly better than that of traditional hardware. Researcher Sun Zhong stated that this work demonstrates the great potential of analog computing for handling real-world complex data, and that it is expected to find wide application in areas such as real-time recommendation and high-definition image processing.