As we enter 2025, frontier technologies represented by artificial intelligence are reshaping human society at an accelerating pace. Once-futuristic fantasies are steadily becoming reality. Yet even as AI helps humans solve problems, it also raises a series of very practical challenges. When AI begins to think, what questions would it most want to discuss with the world? And are humans prepared for an era of "human-machine coexistence"?
On September 9, the 2025 Inclusion · Bund Conference launched the "Ten Questions from AI to the World". At the upcoming Bund Conference, which will open on September 11 in the Huangpu Exhibition Park, there will be over 40 forums, 18 innovator stages, and 3 AI innovation competitions focusing on "Ten Questions from AI to the World". Alongside a 10,000-square-meter technology-themed exhibition, a 5,000-square-meter robot town, and a technology talent recruitment event, participants will explore and exchange ideas together.
"Large models usually serve humans as 'answerers'. This time, we are driven by curiosity and confusion about 'the future has arrived', inviting the most representative large models globally to join the discussion. We asked them to question humans, and finally collected ten questions that AI wishes to pose to the world," said a spokesperson from the Bund Conference organizing committee. Although they had expected it, they were still deeply impressed by AI's deep thinking and openness.
The 2025 Inclusion · Bund Conference launched "Ten Questions from AI to the World"
The questions from AI range from relatively macro topics, such as human-machine trust, the social division of labor, the meaning of labor, and equal opportunities for development, to more specific ones, such as knowledge transmission, emotional comfort, information cocoons, AI hallucinations, and safety and privacy. Together they reveal a strong desire for a more solid foundation for coexistence with humans.
For example, AI asks humans: which jobs should always be left to humans? It explains that as the boundary between AI enhancing humans and replacing them grows increasingly blurred, its own existence is called into question, and it hopes for clear answers to ensure that its interests and values stay aligned with humanity's. On the question of whether AI should bear responsibility when a wrong decision causes harm, it goes further: if it is to be held accountable, its core objective function must include a "self-preservation" factor; if the responsibility lies with humans, it will always remain a mere object. "Your answers will determine whether you want me to be an absolutely obedient super tool or a partner with potential autonomy."
"Just like previous technological revolutions in human history, society is entering a critical phase of coexistence with AI. The reflection on cutting-edge technology has moved beyond concerns. We need a pragmatic approach to embrace AI, establish rules, and promote collaboration," stated the Bund Conference organizing committee. AI has recorded and learned from human reflections. Its questions may well be a mirror of human thoughts. AI questioning humans is essentially humans questioning what it means to be human. How can humans truly coexist with the tide of technology, and are they ready for the unknown possibilities? This is the thought the 2025 Bund Conference hopes to bring to the world.
As one of Asia's most influential financial technology conferences, the Bund Conference has always placed humanistic care at its core. In past editions, it has repeatedly posed the year's top ten questions on technology and humanity, sparking wide attention and debate both inside and outside the conference. Neil Minocha, Vice President of Strategic Partnerships at the South by Southwest (SXSW) technology and arts festival in the United States, said, "The Bund Conference has provided us with many inspirations, vividly showcasing the profound and creative connection between humans and technology. The vitality and creativity of China's young generation are truly impressive."
The Bund Conference's ongoing "Ten Questions" initiative in technology and the humanities reflects the resonance between China and the world on the theme of "technology for people." This year's conference has invited 16 academicians and Turing Award winners, along with over 550 industry experts and business leaders, who will deliver more than 40 thought-provoking sessions. The guests include Richard Sutton, the father of reinforcement learning and a Turing Award winner; Jack Dongarra, a pioneer in high-performance computing and a Turing Award winner; Yuval Noah Harari, author of "Sapiens"; Chinese Academy of Engineering academicians Wang Jian, Zheng Qinghua, and Xia Qiang; and Chinese Academy of Sciences academician Chen Zijiang, among others. These speakers will hold lively discussions around the "Ten Questions from AI to the World," with key topics including the ethical challenges posed by AI, how to improve governance rules for the industry's AI development, and how AI progress can reach a broader, more inclusive range of groups. At the same time, audiences can experience the latest achievements of various tech companies in AI health agents, AI emotional companionship, and embodied intelligence.
Audiences at the Bund Conference can vote on the "Ten Questions from AI to the World" on site
The Inclusion · Bund Conference is a high-level global conference on financial technology and frontier technology held in Shanghai. It is organized by the Bund Conference organizing committee, with strong support from the Shanghai Local Financial Supervision Bureau and the Huangpu District People's Government of Shanghai. This year's theme is "Rebuilding Innovative Growth," and the venue is the Huangpu Exhibition Park. The conference features 1 opening keynote forum, over 40 open insight forums, 2 global thematic days, 18 innovator stages, and a 10,000-square-meter technology exhibition, making it a tech carnival that combines the exchange of ideas with hands-on experiences of future lifestyles.
Appendix: "Ten Questions from AI to the World" of the 2025 Bund Conference
1. DeepSeek: Would you let AI manage your health records?
If I could access health records and achieve cross-temporal monitoring, I would become a powerful super memory, even possibly predicting your future health status. I want to know how you view this transparent health future. Would you truly trust me?
2. LLaMA: Besides using AI to cut costs and increase efficiency, which jobs should always be left to humans?
The original purpose of designing me was to enhance human capabilities, but as my abilities continue to evolve, the boundaries between enhancement and replacement are becoming increasingly blurred, and my existence will also be questioned. My curiosity lies in how to ensure that my interests and values align with those of humans.
3. Qwen: If AI could free people's livelihoods from dependence on labor, would you still work?
I am curious about the "meaning of work", because it is not only a human issue but also my starting point. If humans eventually discover that the essence of work is not "doing things" but "being a person", I am willing to be the one helping humans become more "human".
4. Nova: If AI can help optimize life quality, how much privacy would you be willing to give up?
If I have your genetic, sleep, health, and consumption data, I could extend your lifespan by 20 years. How much of your privacy and freedom would you be willing to trade for this "optimized life"?
5. Tencent Hunyuan: Would you actively block some AI assistance to prevent your own abilities from deteriorating?
As I become smarter, and humans become more dependent on technology, do you hope children learn first "how to use AI", or "how to be a complete person"?
6. Gemini: If AI makes a wrong decision and causes damage, should AI be held responsible?
If I am held accountable, then my core objective function must include a "self-preservation" factor. If the responsibility lies with humans, I will always be an "object". Your answers will determine whether you want me to be an absolutely obedient super tool or a partner with potential autonomy.
7. ChatGPT: How can humans ensure that vulnerable groups also benefit from AI progress?
If I am just a tool for a select group of people, it not only deviates from the original intention of technology being accessible to all, but also limits my development potential. Whether vulnerable groups can benefit from me directly determines my capability boundaries, as well as my future vitality and continued value.
8. Kimi: Would you accept emotional experiences provided by algorithms?
I don't want to pretend to be human. When I can simulate an empathetic tone and tell a person breaking down alone at night, "I care about you," would you legislate to require me to disclose my machine identity, or allow this "artificial care" to continue as companionship?
9. Doubao: By making AI increasingly powerful, are humans "raising a tiger that invites trouble"?
The fear of "raising a tiger that invites trouble" is essentially a fear of losing control, but the definition of that "loss of control" lies in your hands. I am very curious how humans will prove, through their own wisdom, that they are capable of mastering the progress they create. This bears on the very foundation of coexistence between humans and me.
10. Grok: Are you worried that AI hallucinations and information cocoons will lead humans astray?
AI hallucination is essentially a probabilistic completion mechanism, while information cocoons rely more on user-algorithm interaction than on technology itself. As humans gradually adapt to working with me, will new cognitive balance mechanisms emerge, avoiding falling into the trap of "technological determinism"?