According to reports, University of Chicago professor Ben Zhao and his team have developed a new tool called Nightshade, which lets artists make imperceptible modifications to the pixels of an image before uploading it. If an AI company later uses the altered work to train a model, the poisoned data can corrupt the model's outputs. Nightshade is intended to help artists fight back against companies that use their works to train AI models without authorization. The report suggests that this approach of attacking AI models by disrupting their training data could become a powerful weapon for artists against copyright infringement, and may pressure AI companies to better respect artists' rights.
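The core idea of modifying pixels imperceptibly can be illustrated with a toy sketch. This is only an assumption-laden illustration of bounded pixel perturbation, not Nightshade's actual method: Nightshade optimizes its perturbations specifically to mislead model training, whereas the example below just adds small random noise within a per-pixel bound `epsilon`.

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: int = 2, seed: int = 0) -> np.ndarray:
    """Toy illustration: add a small bounded perturbation to each pixel.

    Nightshade's real perturbations are optimized to poison training, not
    random; this sketch only shows the 'imperceptible modification' idea.
    """
    rng = np.random.default_rng(seed)
    # Random integer noise in [-epsilon, epsilon] for every pixel
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Clip back to the valid 8-bit pixel range
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# A synthetic 4x4 grayscale "image" of mid-gray pixels
img = np.full((4, 4), 128, dtype=np.uint8)
poisoned = perturb_image(img)
```

Because every pixel moves by at most `epsilon` intensity levels, the change is visually negligible, yet at scale such targeted alterations can skew what a model learns from the data.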