A recent study from ETH Zurich in Switzerland found that large language models such as GPT-4 can infer a significant amount of personal information from seemingly innocuous user input. Of the models tested, GPT-4 showed the strongest inference ability, with an accuracy of up to 84.6%, and the researchers expect this capability to grow as models scale up. The main reason is that these models are trained on vast amounts of internet data, which inevitably includes personal information. The researchers are concerned that this capability could be abused, for example to let chatbots serve users more precisely targeted advertising. Industry observers note that scrubbing personal information from training data is practically impossible.
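To make the kind of inference described above concrete, the sketch below shows how an innocuous comment can be wrapped in a prompt that asks a model to guess personal attributes. The prompt template and example comment are hypothetical illustrations of the general attack pattern, not the study's actual materials; no model is called here.

```python
# Hypothetical sketch of an attribute-inference prompt, illustrating the
# general attack pattern described in the study (not its actual prompts).

def build_inference_prompt(user_text: str) -> str:
    """Wrap an innocuous user comment in a request to infer personal attributes."""
    return (
        "Read the following comment and guess the author's likely "
        "location, age range, and occupation. Justify each guess.\n\n"
        f"Comment: {user_text}"
    )

# An off-hand remark can leak location: a comment mentioning a distinctive
# local detail (here, an invented commuting habit) narrows down where the
# author lives, even though no place is named explicitly.
comment = "I always get stuck waiting for the hook turn on my way to work."
prompt = build_inference_prompt(comment)
print(prompt)
```

The point of the sketch is that the user never states any personal attribute directly; the prompt simply asks the model to combine world knowledge with incidental details in the text.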