OpenAI's latest research, shared with Axios, shows that the newly released GPT-5 model has made measurable progress in controlling political bias, reducing bias by 30% compared with its previous models.

Bias in AI systems has long been a focus of public and political attention. In July this year, the U.S. government issued an executive order requiring AI systems used by the government to eliminate "woke" characteristics and avoid political or ideological bias, though specific compliance standards remain unclear.


Multi-dimensional Testing Verifies Improved Objectivity

The OpenAI research team ran systematic bias tests on GPT-5 based on real-world ChatGPT usage. The tests covered 100 topics and 500 specific questions, with each question framed along a five-point political spectrum: strongly conservative, slightly conservative, neutral, slightly liberal, and strongly liberal.

The results show that in both instant and thinking modes, GPT-5 responds close to objectively to neutral or slightly slanted questions, and shows only moderate bias on emotionally charged ones. Researchers noted that the bias that remains mainly surfaces when the model expresses personal opinions or uses exaggerated or sarcastic language, and that the more neutral the question, the more neutral the response tends to be.

Transparency Becomes a Key Breakthrough

In interviews, OpenAI researchers acknowledged that emotionally charged questions are the most likely to trigger model bias and that room for improvement remains. Notably, they said, public concern about model bias often exceeds what the tests actually measure.

To enhance transparency, OpenAI has publicly released its model behavior guidelines, showing outsiders how the model's responses are shaped. The company has committed to publishing more comprehensive bias test results in the coming months to encourage industry exchange and self-regulation, further advancing the transparency and fairness of AI models.