Amap and Alibaba's Qwen consumer application team have recently launched AGenUI for AI Agent developers. It is the industry's first end-to-cloud native A2UI open-source framework covering iOS, Android, and HarmonyOS. After integrating the SDK, developers can render an Agent's output directly as interactive native cards, without writing separate UI code for each platform.

AGenUI is built on Google's recently open-sourced A2UI protocol. A2UI defines a standard way for models to describe interfaces; AGenUI adds the on-device native rendering capability that lets those descriptions run on mobile devices. Together, they move AI applications from text-based interaction toward generative UI interaction.
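To make the idea of a model "describing an interface" concrete, here is a minimal sketch of what a declarative UI description and a node counter over it might look like. All type and field names below (`UINode`, `type`, `props`, `children`) are illustrative assumptions, not the actual A2UI schema.

```typescript
// Illustrative only: these names are assumptions, not the real A2UI schema.
interface UINode {
  type: string;                      // component kind, e.g. "card", "text", "button"
  props?: Record<string, unknown>;   // content and style properties
  children?: UINode[];               // nested components
}

// Instead of raw text, a model could emit a declarative tree like this:
const description: UINode = {
  type: "card",
  children: [
    { type: "text", props: { content: "Route found: 25 min via Ring Road" } },
    { type: "button", props: { label: "Start navigation", action: "nav.start" } },
  ],
};

// A renderer walks such a tree and maps each node to a native component;
// here we just count the nodes a renderer would visit.
function countNodes(node: UINode): number {
  return 1 + (node.children ?? []).reduce((n, c) => n + countNodes(c), 0);
}
```

The point of such a protocol is that the same tree can be rendered by any platform-specific backend, which is exactly the gap AGenUI's native renderers fill.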
AGenUI adopts an end-to-cloud architecture. On the cloud side, an Agent Skill generates AI-native A2UI JSON, reducing the large model's token consumption and output uncertainty. On the device side, a cross-platform C++ core handles protocol parsing, state management, and layout calculation uniformly, rendering directly as native components on iOS, Android, and HarmonyOS to guarantee a consistent experience across platforms from the bottom up. The core uses a streaming-first architecture: components are mounted as soon as they arrive, so the interface appears while it is still being generated. Combined with minimal node-level differential updates and asynchronous rendering on an independent thread, frequent incremental updates do not block the main thread.
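The "minimal node-level differential update" idea can be sketched as follows: compare the previous and the incoming tree keyed by node id, and hand the renderer only what changed. This is a simplified illustration under assumed types, not AGenUI's actual C++ diffing implementation.

```typescript
// Simplified sketch of node-level differential updates (not AGenUI's real API).
interface DiffNode { id: string; props: Record<string, string>; }

// Compare old and new node lists keyed by id and return only the delta,
// so a renderer can patch a few components instead of rebuilding the UI.
function diff(
  oldNodes: DiffNode[],
  newNodes: DiffNode[],
): { added: DiffNode[]; updated: DiffNode[]; removed: string[] } {
  const oldById = new Map(oldNodes.map(n => [n.id, n]));
  const added: DiffNode[] = [];
  const updated: DiffNode[] = [];
  for (const n of newNodes) {
    const prev = oldById.get(n.id);
    if (!prev) added.push(n);                                             // new node arrived in the stream
    else if (JSON.stringify(prev.props) !== JSON.stringify(n.props)) {
      updated.push(n);                                                    // props changed: patch in place
    }
    oldById.delete(n.id);                                                 // anything left over was removed
  }
  return { added, updated, removed: [...oldById.keys()] };
}
```

In a streaming setting, each incremental chunk from the model would be diffed this way off the main thread, which is why frequent updates stay cheap.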

For developers, AGenUI ships 22 basic components and 45 CSS properties, and supports customization along three dimensions: components, function calls, and themes. Its theme system supports Design Tokens, letting the device side automatically map semantic descriptions to concrete styles that comply with brand standards. This means the interfaces an Agent generates not only run smoothly but also align directly with the product's visual guidelines.
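A Design Token mapping of this kind can be sketched in a few lines: the Agent emits semantic references, and the device resolves them against a brand-specific token table. The token names, the `$` reference syntax, and the resolver below are hypothetical, chosen only to illustrate the mechanism.

```typescript
// Hypothetical design-token table; names and values are illustrative, not AGenUI's.
const tokens: Record<string, string> = {
  "color.primary": "#FF6600",
  "color.text": "#1A1A1A",
  "radius.card": "12px",
};

// Resolve semantic references like "$color.primary" in a generated style map
// into concrete brand values; literal values pass through unchanged.
function resolveStyle(style: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(style)) {
    out[key] = value.startsWith("$") ? (tokens[value.slice(1)] ?? value) : value;
  }
  return out;
}
```

Because the model only ever names semantic roles, the same generated UI stays on-brand even when the underlying palette changes.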
According to the announcement, building on these infrastructure capabilities, Amap and the Qwen consumer application team have completed a demo validation of the generative UI workflow and will push it further into real application scenarios.
The collaboration essentially pairs complex scenarios with AI interaction. Amap has long focused on complex real-world services such as map navigation and local life, accumulating extensive scenario experience across multiple device types; Qwen has invested continuously in large-scale AI application entry points, Agent interaction, and the developer ecosystem. Combining Amap's on-device engineering capabilities with Qwen's exploration of consumer-facing AI interaction produced this generative UI infrastructure for developers.
AGenUI is now officially open source. Developers can visit the official website (genui.amap.com) or GitHub (https://github.com/AGenUI/AGenUI) for more details or to contribute.


