After launching the Qwen3.5 series in February this year, Tongyi Lab has officially released Qwen3.6-Plus.

Key Upgrades: Focusing on Coding Agents and Long Context
The main focus of Qwen3.6-Plus is to tightly integrate deep logical reasoning, large-scale memory, and precise execution. Its core advantages include:
Leap in coding ability: It performs exceptionally well in scenarios such as front-end page generation, code repair, and terminal automation. As the first domestic model of its size to achieve comprehensive leadership in agent programming, it offers a more stable agent experience at a lower cost.
Million-level context: It supports a one-million-character context window by default, significantly improving accuracy in long-document parsing and information extraction across multi-turn dialogue.
Strong cost-performance ratio: The model is less than half the size of K2.5 or GLM5, yet its engineering capabilities closely approach those industry benchmarks.
Ecosystem Compatibility: Seamless Integration with Mainstream Development Tools
To help developers get started immediately, Qwen3.6-Plus has achieved deep compatibility with multiple third-party coding assistants:
OpenClaw (formerly Moltbot): An open-source AI coding agent that supports self-hosting. With simple configuration, it provides a complete agent coding experience in the terminal.
Qwen Code: A terminal agent optimized specifically for the Qwen series, supporting complex codebase understanding and automated tasks.
Claude Code: The Qwen API now supports the Anthropic protocol, allowing developers to directly call Qwen3.6-Plus within the Claude Code workflow.
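Since Claude Code reads its endpoint and credentials from standard environment variables, pointing it at a Qwen Anthropic-compatible endpoint can be sketched as below. The base URL and model name are placeholders, not official values; check the Qwen API documentation for the actual endpoint and model identifier.

```shell
# Placeholder values: substitute the real Anthropic-compatible endpoint
# and model id from the official Qwen API documentation.
export ANTHROPIC_BASE_URL="https://your-qwen-anthropic-endpoint"  # placeholder
export ANTHROPIC_AUTH_TOKEN="your-qwen-api-key"                   # your key
export ANTHROPIC_MODEL="qwen3.6-plus"                             # assumed model id

# Launch Claude Code as usual; it now routes requests to the Qwen endpoint.
claude
```

With this setup, no changes to the Claude Code workflow itself are needed; only the backend serving the requests differs.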
Visual Agent: From "Seeing" to "Executing"
In the multimodal field, Qwen3.6-Plus has closed the loop from visual perception to agent execution. The model can not only perform complex financial calculations from visual input (such as automatically computing the winnings and profit across multiple scratch cards), but also generate front-end code directly from design drafts. This "visual agent" capability allows it to understand GUI interfaces and decide on the next action, gradually evolving into a native multimodal system that continuously perceives real environments.
Additionally, Tongyi Lab has introduced a preserve_thinking option in the API, which retains the thinking-chain content from previous rounds. This is particularly beneficial for complex agent tasks that require long-term planning. According to reports, more versions of the Qwen3.6 series (including high-performance and lightweight open-source variants) will be released in the near future.