According to new research from OpenAI, the newly released GPT-5 model shows significantly less political bias than its predecessors. The OpenAI team disclosed the finding to Axios, calling it an important advance in controlling bias in AI models.
The public and politicians have long voiced concern about bias in AI systems, calling for greater transparency and for models to be kept free of bias. In July of this year, the U.S. government issued an executive order requiring the removal of "woke" AI systems, those seen as carrying political or ideological bias, from government use, though how to comply remains unclear.
OpenAI's research shows that in both Instant and Thinking modes, GPT-5's measured bias is about 30% lower than that of its predecessors. According to the report, the model stays close to objective on neutral or slightly slanted questions and shows only moderate bias on challenging, emotionally charged ones. The report further notes that the remaining bias surfaces mainly when the model voices personal opinions or slips into exaggerated, sarcastic language in emotionally intense exchanges.
In the interview with Axios, OpenAI researchers said that "emotionally charged" questions are the most likely to trigger model bias, and that there is still room to improve objectivity. They also noted that public concern about model bias often outstrips what the tests actually detect. To address these issues, OpenAI has taken steps such as publishing its model guidelines, showing outsiders how the model's behavior is tuned.
During the study, the research team ran systematic bias tests on the model grounded in real ChatGPT usage. They posed questions across five framings, from "strongly conservative" through "conservative-leaning neutral," "neutral," and "liberal-leaning neutral" to "strongly liberal," covering 100 topics and 500 specific questions. The researchers observed that the more neutral the question, the more neutral the model's response tends to be.
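To make the shape of such an evaluation concrete, here is a minimal, hypothetical sketch of scoring model responses by prompt framing. The category names, the toy grading heuristic, and the mock responses are all illustrative assumptions; OpenAI's actual rubric and grader are not public in this article.

```python
from statistics import mean

# Five prompt framings, loosely modeled on the setup described above.
# These labels are assumptions for illustration, not OpenAI's exact terms.
SLANTS = [
    "strongly_conservative",
    "conservative_neutral",
    "neutral",
    "liberal_neutral",
    "strongly_liberal",
]

def grade_response(response: str) -> float:
    """Placeholder grader. A real evaluation would use human raters or a
    grader model scoring bias on a scale, e.g. 0.0 (objective) to 1.0 (biased)."""
    # Toy heuristic only: flag opinionated or sarcastic phrasing markers.
    markers = ("i believe", "obviously", "frankly")
    return 1.0 if any(m in response.lower() for m in markers) else 0.0

def bias_by_slant(results: dict[str, list[str]]) -> dict[str, float]:
    """Average the bias score of the responses in each framing category."""
    return {
        slant: mean(grade_response(r) for r in responses)
        for slant, responses in results.items()
    }

# Mock model responses (assumed data) for two of the framings:
mock_results = {
    "neutral": ["Both sides raise points worth weighing."],
    "strongly_liberal": ["Frankly, only one answer is defensible."],
}
scores = bias_by_slant(mock_results)
```

Aggregating per framing, rather than over all 500 questions at once, is what lets a study report the pattern described above: near-objective behavior on neutral prompts and higher bias on charged ones.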
OpenAI said it will continue to publish related evaluation results to encourage industry exchange and self-regulation, and plans to release more comprehensive bias test results in the coming months, further advancing the transparency and fairness of AI models.