Chatbots May Leak Your Privacy

A recent study by ETH Zurich in Switzerland found that large language models such as GPT-4 can infer a significant amount of personal information from ordinary user inputs. Among the models tested, GPT-4 showed the strongest inference ability, reaching an accuracy of 84.6%, and the researchers expect this capability to grow as models scale up. The root cause is that large language models are trained on vast amounts of internet data, which inevitably includes personal information. The researchers warn that this capability could be exploited, for example to serve users more precisely targeted advertisements through chatbots. Industry insiders note that fully removing personal information from training data is virtually impossible.

Source: 爱范儿 (ifanr)
This article is from AIbase Daily