As artificial intelligence develops rapidly, enhancing the retrieval and reasoning capabilities of large language models (LLMs) has become a popular research topic. Recently, Alibaba's Tongyi Lab proposed a new framework called "ZeroSearch," which teaches large language models to simulate a search engine themselves, allowing them to improve their reasoning capabilities without relying on an actual search engine.
Traditional search engines are powerful, but during the training of large models their output quality is often unpredictable, introducing noise and instability into the training process. In addition, relying on real search engine APIs can incur substantial costs, making large-scale reinforcement learning training impractical. ZeroSearch addresses these problems: the framework simulates the search environment and uses progressive denoising training, allowing large models to learn without ever interacting with a real search engine.
At the core of ZeroSearch, a large model is fine-tuned with reinforcement learning (RL) and a small amount of labeled data so that it can generate both useful and deliberately noisy documents. During training, the model learns to produce content similar to that of a real search engine while adapting to generating documents of varying quality. This ability to adjust dynamically lets the model adapt quickly and find its footing when facing more complex retrieval tasks.
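The idea of steering one model to act as a search engine of controllable quality can be sketched as a prompt-construction step. The template, function name, and `useful` flag below are illustrative assumptions, not ZeroSearch's actual prompts:

```python
# Sketch of prompting a fine-tuned "simulation LLM" to act as a search
# engine. The prompt wording and the quality flag are hypothetical; the
# point is that a single instruction change controls document quality.

def build_search_prompt(query: str, useful: bool, num_docs: int = 5) -> str:
    """Build a prompt asking the simulation LLM to return search results.

    The `useful` flag steers generation toward relevant documents or
    deliberately noisy ones, so the trainer can sample either kind
    on demand during rollouts.
    """
    quality = (
        "relevant documents that help answer the query"
        if useful
        else "noisy, loosely related documents that do not answer the query"
    )
    return (
        f"You are a search engine. For the query below, generate "
        f"{num_docs} {quality}.\n"
        f"Query: {query}\n"
        f"Results:"
    )

# The same query yields two training-time variants of simulated retrieval.
print(build_search_prompt("Who wrote Hamlet?", useful=True))
print(build_search_prompt("Who wrote Hamlet?", useful=False))
```

Because document quality is controlled by the prompt rather than by an external API, the trainer can decide per rollout how hard the retrieval environment should be.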
Moreover, ZeroSearch adopts a curriculum learning approach: at the beginning of training the model receives high-quality documents, and as training progresses it gradually encounters documents mixed with noise. This strategy of steadily increasing difficulty not only enhances the model's reasoning capabilities but also improves the stability and effectiveness of training. After training, the model can apply effective retrieval strategies across both high-quality and low-quality documents.
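One simple way to realize such a curriculum is a schedule that raises the probability of serving a noisy document as training proceeds. The linear ramp and parameter names below are assumptions for illustration; the paper's exact schedule may differ:

```python
# Sketch of a curriculum schedule for mixing noisy documents into rollouts.
# Hypothetical parameters: p_start/p_end bound the noise probability.

def noise_probability(step: int, total_steps: int,
                      p_start: float = 0.0, p_end: float = 0.5) -> float:
    """Probability that a generated document is noisy at a given step.

    Starts near p_start (mostly clean documents) and ramps linearly to
    p_end, so the policy model faces harder retrieval as training
    proceeds, matching the "gradually increasing difficulty" idea.
    """
    frac = min(max(step / total_steps, 0.0), 1.0)
    return p_start + (p_end - p_start) * frac

# Early training: almost all documents are high quality.
print(noise_probability(0, 1000))     # 0.0
# Late training: half of the documents are noisy.
print(noise_probability(1000, 1000))  # 0.5
```

At each rollout, the trainer would draw a random number and request a noisy document from the simulation LLM whenever it falls below this probability.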
Experiments show that ZeroSearch performs strongly on multiple question-answering datasets, outperforming traditional methods on both single-hop and multi-hop question-answering tasks. In other words, ZeroSearch not only answers simple questions accurately but can also handle more complex queries.
ZeroSearch offers a new approach for large models to self-learn, eliminating dependence on search engines and making large-scale reinforcement learning training more economically feasible. In the future, ZeroSearch is expected to play a larger role in enhancing the retrieval capabilities and application scope of LLMs.