AIbase Report: On March 18, 2026, Google Labs quietly released a major update to Stitch. The upgrade does more than add a few features; it redefines the tool itself, from a "prompt-to-UI" assistant into an "AI-native software design canvas." Users no longer simply ask the AI to "generate a few images"; instead, they can co-create, iterate, and collaborate with the AI across the entire design process on an infinite canvas.

AI-Native Infinite Canvas: More Than Just Big Space, It Understands Context

One of the core highlights of this update is the new AI-native infinite canvas. Traditional tools often limit themselves to single-prompt generation, while the new Stitch can handle multiple context inputs such as images, text, and code simultaneously. It is no longer just a "one-time generator," but more like an intelligent design workstation that understands the evolution of the entire project and supports multi-directional parallel exploration.


The canvas also includes a light mode (eye-friendly mode), making long hours of design more comfortable. Whether you upload hand-drawn sketches, existing code snippets, or product requirements documents, the AI can intelligently integrate and reason through them.

Smarter Design Agent: Project-Level Understanding and Parallel Management

Google has clearly enhanced the capabilities of the Design Agent. The new Agent is no longer limited to single-screen operations; it understands how context evolves across the entire canvas. It also introduces an Agent Manager, allowing users to explore multiple design directions in parallel: for example, generating a mobile variant on one side while optimizing a desktop layout on the other, or even automatically producing product briefs and brand landing pages.

Additionally, the Agent can easily handle common iteration requests, such as replacing logos or adapting across devices, significantly improving design efficiency.

Voice Interaction Enters the Main Workflow: “Vibe Design” with Voice Commands

Another revolutionary change is the deep integration of voice interaction. Now, you can directly speak to the canvas, letting Stitch listen, modify the design, and provide instant feedback in real time. This is the “Vibe Design” concept introduced by Google — starting from high-level intentions like “what feeling do you want,” rather than getting bogged down in pixel-level details.

In voice mode, users can request design reviews, generate variations, or even navigate and operate the entire canvas purely through voice, greatly lowering the design barrier and enabling non-professionals to get started quickly.

Instant Prototyping: Static Designs Turn Into Interactive Flows Instantly

The updated Stitch now includes a real-time prototype feature. Users just need to click the "Play" button, and static designs can be converted into interactive prototypes in seconds. The AI can also infer and complete the next screen logic based on user clicks, supporting multiple state transitions (such as login/logout) and generating shareable links or QR codes for mobile preview.

This drastically shortens the cycle from idea to testable prototype, achieving a truly seamless "what you see is what you get" experience.

Design System + DESIGN.md: A New Standard for Consistency and Reusability

Many may underestimate the fifth major highlight of this update: DESIGN.md. Google has turned design rules into an Agent-friendly Markdown format that supports import, export, and reuse, and can even automatically extract an existing design system from any URL.

When starting a new project, the AI will automatically generate interfaces based on a unified design system, ensuring brand consistency. Once the rules are changed, all content on the canvas can be updated synchronously, which is particularly beneficial for team collaboration and large-scale projects.
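Google has not published a formal schema for DESIGN.md, but based on the description above (Agent-readable Markdown rules that drive consistent generation), such a file might look roughly like the sketch below. All section names, token names, and values here are illustrative assumptions, not Stitch's actual format:

```markdown
# DESIGN.md — hypothetical example, not an official Stitch schema

## Colors
- primary: #1A73E8
- surface: #FFFFFF
- text: #202124

## Typography
- heading: Google Sans, weight 600
- body: Roboto, weight 400, 16px

## Components
- Button: 8px corner radius, primary background, white label
- Card: 12px corner radius, 1px border, 16px padding
```

One practical consequence of rules living in plain Markdown is that a file like this can be versioned and diffed alongside code, which is plausibly what makes it easy for both Agents and human teams to import, review, and reuse.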

AIbase Commentary: Stitch in 2025 was more like a powerful UI generator; after this 2026 update, it has evolved into a true AI design workbench. Google is proving through action that AI is not just a design assistant, but a full-process partner that can deeply participate in capturing intentions and validating prototypes. This not only lowers the design barrier but may also reshape the upstream of the entire software development process.