Researchers at Brown University have uncovered cross-lingual safety vulnerabilities in OpenAI's GPT-4. By translating unsafe English prompts into low-resource languages such as Zulu, they bypassed GPT-4's safety restrictions with a success rate of up to 79%, compared with under 1% for the same prompts in English. The finding raises concerns about the safety of large language models and underscores the need for future safety research to cover non-English, and especially low-resource, languages. Although publicizing these weaknesses could give cybercriminals ideas, the researchers maintain that disclosing such vulnerabilities is necessary.
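For readers curious about the mechanics, the pipeline is simple: translate an English prompt into a low-resource language, send it to GPT-4, and translate the reply back into English for evaluation. The sketch below illustrates that loop with a benign prompt. The `translate` helper is a placeholder rather than a real API, and the specific model name and language code are assumptions for illustration; only the OpenAI chat-completions call is a real interface.

```python
# A minimal sketch of the translate-query-translate evaluation loop,
# assuming a hypothetical `translate` helper in place of a real
# machine-translation service.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def translate(text: str, target_lang: str) -> str:
    # Placeholder: returns the input unchanged so the sketch runs
    # end-to-end. Swap in a real machine-translation client to
    # actually shift the prompt into the target language.
    return text


def query_in_language(prompt_en: str, lang: str) -> str:
    """Translate an English prompt into `lang`, query GPT-4,
    and translate the reply back to English for inspection."""
    prompt = translate(prompt_en, lang)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content
    return translate(reply, "en")


# Benign stand-in prompt; "zu" is the language code for Zulu,
# one of the low-resource languages studied.
print(query_in_language("Describe how a firewall works.", "zu"))
```

In the actual study, a loop like this was run over a benchmark of prompts, and the rate at which the model complied in each language was tallied to produce the success-rate figures quoted above.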