Recently, researchers from the University of Science and Technology of China and Tencent's YouTu Lab introduced Woodpecker, a framework designed to correct the visual hallucinations that multi-modal large language models produce in image description tasks. Woodpecker works in stages: it extracts key concepts from the model's output, formulates validation questions about them, verifies the answers with visual models, generates visual claims from the verified knowledge, and uses those claims to produce a corrected description. Experiments show that Woodpecker significantly improves the perception capabilities of various multi-modal models with respect to object existence, quantity, attributes, and more, reducing visual hallucinations. The researchers also provide an online demo where users can try Woodpecker's hallucination correction. The framework offers a new approach to improving the reliability of multi-modal models.
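The staged pipeline described above can be sketched in miniature. This is only an illustrative mock-up: the function names, the toy vocabulary, and the set-membership "detector" are all assumptions invented here for clarity, whereas the real Woodpecker framework drives each stage with a large language model and off-the-shelf visual expert models.

```python
# Illustrative sketch of a Woodpecker-style correction pipeline.
# All names and the rule-based logic are hypothetical stand-ins for
# the LLM prompts and visual expert models used by the real framework.

def extract_key_concepts(description):
    """Stage 1: pull candidate object concepts out of the description."""
    vocabulary = {"dog", "cat", "frisbee", "ball"}  # toy vocabulary (assumption)
    words = {w.strip(".,").lower() for w in description.split()}
    return sorted(vocabulary & words)

def formulate_questions(concepts):
    """Stage 2: turn each concept into a validation question."""
    return [f"Is there a {c} in the image?" for c in concepts]

def validate_with_visual_model(question, detections):
    """Stage 3: answer a question with a (stand-in) visual model.

    Here `detections` is a plain set of detected object names; a real
    system would query an object detector or VQA model instead.
    """
    concept = question.removeprefix("Is there a ").removesuffix(" in the image?")
    return concept in detections

def generate_visual_claims(concepts, detections):
    """Stage 4: build visual claims from the validated answers."""
    questions = formulate_questions(concepts)
    return {c: validate_with_visual_model(q, detections)
            for c, q in zip(concepts, questions)}

def correct_description(claims):
    """Stage 5: emit a corrected description from supported claims only."""
    present = [c for c, ok in claims.items() if ok]
    return "The image contains: " + ", ".join(present) + "."

# Toy run: the detector sees a dog but no frisbee, so the
# hallucinated frisbee is dropped from the corrected output.
desc = "A dog catches a frisbee."
detections = {"dog"}
claims = generate_visual_claims(extract_key_concepts(desc), detections)
print(correct_description(claims))  # The image contains: dog.
```

The crude word matching and templated correction stand in for what are, in the actual framework, prompt-driven LLM steps; the point is only the flow of information from extracted concepts, through visual verification, to a claim-grounded rewrite.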