As artificial intelligence (AI) becomes increasingly integrated into daily life, an important question arises: who should be held responsible when AI makes mistakes? Because AI lacks consciousness and free will, it is difficult to blame the system itself for its errors. Dr. Hyungrae Noh, an assistant professor at Pusan National University, has recently examined this issue in depth and proposed a distributed model of AI responsibility.
AI systems typically operate semi-autonomously through complex and opaque processes. As a result, even though these systems are developed and used by humans, the harms they cause are often unpredictable. Traditional ethical frameworks therefore struggle to explain who should be held accountable when AI causes harm, a problem known as the "responsibility gap." Professor Noh's research points out that traditional moral frameworks rely on human psychological capacities such as intention and free will, which makes it difficult to assign responsibility clearly to either AI systems or their developers.
In his research, Professor Noh argues that AI systems cannot be held morally accountable because they lack the capacity and awareness to understand their own actions. They have no subjective experience, lack genuine intent and decision-making capacity, and often cannot explain their own behavior. Attributing responsibility to such systems is therefore unreasonable.
The study also explores Luciano Floridi's non-anthropocentric theory of responsibility. On this view, human developers, users, and programmers have a duty to monitor and adjust AI systems to prevent harm and, if necessary, to disconnect or delete them. If an AI system possesses a certain degree of autonomy, this responsibility extends to the system itself as well.
Professor Noh concludes that a distributed model of responsibility should be recognized: human stakeholders and AI agents share the responsibility of addressing harms caused by AI, even when those harms were unforeseen or not explicitly intended. This perspective helps correct errors promptly, prevent future harm, and promote the ethical design and use of AI systems.
Key Points:
✅ AI systems lack consciousness and free will, making it difficult to hold them directly accountable.
🔍 A "responsibility gap" arises because traditional ethical frameworks cannot explain who is responsible for harm caused by AI.
🤝 The distributed model of responsibility emphasizes that humans and AI share the responsibility of preventing harm.