Welcome to the "AI Daily" section, your daily guide to the world of artificial intelligence. Each day we bring you the latest AI news with a developer focus, helping you track technical trends and innovative AI product applications.
Hot AI products (click to learn more): https://app.aibase.com/zh
1. Rejecting preachy lecturing: OpenAI urgently launches GPT-5.3 Instant, with GPT-5.4 on the way
OpenAI launched GPT-5.3 Instant, focusing on curbing ChatGPT's preachy, condescending tone, with breakthroughs in hallucination rate and creative writing.

AiBase Summary:
🧠 GPT-5.3 Instant improves the user experience, cutting back on preachy lecturing for a more natural, equal-footed tone.
📊 Hallucination rate is significantly reduced, improving reliability in fields such as medicine and law.
🎨 Creative writing ability is enhanced; it is better at moving readers through detailed description.
2. Write code just by talking! Anthropic releases Claude Code voice mode
Anthropic released the Claude Code voice mode, allowing developers to perform programming tasks through voice commands, improving development efficiency and expanding application scenarios. This feature is currently available only to 5% of Windows users and is expected to be fully launched this month.

AiBase Summary:
🎙️ Programming interaction evolution: Claude Code adds a voice mode, supporting voice commands to enable "spoken" code refactoring.
📈 Strong commercial performance: Anthropic revealed its annualized revenue has exceeded $2.5 billion, doubling both revenue and active users within two months.
⏳ Gradual release: Currently, only 5% of Windows users can use it, and it is expected to cover all developers this month.
3. Alibaba's core team gathers for the first time! Ma Yun returns to Yungu School: The impact of AI is beyond imagination
Ma Yun led the core management teams of Alibaba and Ant Group on a visit to Hangzhou's Yungu School to discuss educational transformation amid the AI wave. He emphasized that education should shift from being knowledge-driven to wisdom-driven, cultivating creativity, independent thinking, and a sense of responsibility, and he urged the education sector to adapt quickly to the changes AI brings.

AiBase Summary:
🧠 The arrival of the AI era is faster than expected, with a strong impact on society.
💡 Education should shift from "knowledge-driven" to "wisdom-driven," focusing on cultivating creativity and imagination.
🤝 Ma Yun calls on the education sector to quickly adapt to AI changes and teach children how to use AI tools.
4. OpenClaw can now "train while it runs": stable v1.0 of the agent reinforcement learning framework AReaL released
The stable v1.0 release of AReaL addresses two problems for intelligent agents: the high cost of connecting them to training, and their lack of continuous-evolution capability. The framework enables zero-code access to RL training through a Proxy Worker intermediate layer and introduces Archon, a native training engine that supports 5D parallelism, lowering the barrier to development.

AiBase Summary:
🧠 AReaL v1.0 allows intelligent agents to access reinforcement learning training without modifying code.
🚀 Zero-code access to RL training is achieved through the Proxy Worker intermediate layer.
🛠️ Native training engine Archon supports 5D parallelism, lowering the development threshold.
Details link: https://github.com/inclusionAI/AReaL
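As a rough illustration of the "zero-code access" idea, the sketch below shows a hypothetical proxy layer that sits between an unmodified agent and its model backend, recording every call as a trajectory an RL trainer could later score. All names here (ProxyWorker, Step, toy_backend) are illustrative assumptions, not AReaL's actual API.

```python
# Hypothetical sketch of a "proxy worker" pattern: the agent calls the proxy
# exactly as it would call the model backend, so no agent code changes are
# needed, while the proxy records each interaction for later RL training.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Step:
    prompt: str
    response: str
    reward: float = 0.0


@dataclass
class ProxyWorker:
    backend: Callable[[str], str]            # the real model call
    trajectory: List[Step] = field(default_factory=list)

    def __call__(self, prompt: str) -> str:
        # Transparent pass-through: forward the call, log the step.
        response = self.backend(prompt)
        self.trajectory.append(Step(prompt, response))
        return response

    def assign_reward(self, reward: float) -> None:
        # After an episode, a trainer scores the whole trajectory.
        for step in self.trajectory:
            step.reward = reward


def toy_backend(prompt: str) -> str:
    return f"echo: {prompt}"


proxy = ProxyWorker(backend=toy_backend)
proxy("refactor this function")   # agent-side call, transparently logged
proxy.assign_reward(1.0)
print(len(proxy.trajectory), proxy.trajectory[0].reward)
```

The key design point is that the proxy exposes the same call signature as the backend, which is what makes the integration "zero-code" from the agent's perspective.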
5. StepXing's Step3.5Flash is fully open-sourced: 196-billion-parameter MoE architecture, query volume ranks second on OpenClaw
StepXing announced the open-source release of its Step3.5Flash model, which uses a sparse MoE architecture with 196 billion total parameters and activates only about 11 billion during inference, achieving high energy efficiency. Its inference speed on code tasks reaches up to 350 TPS, showing it can challenge top closed-source models. The model is already active in the open-source community, with over 300,000 downloads, and its query volume on OpenClaw ranks second globally.

AiBase Summary:
🚀 StepXing officially opens the Step3.5Flash model, enhancing developers' ability to build high-performance agents.
🧠 Step3.5Flash uses a sparse MoE architecture with a total of 196 billion parameters, activating about 11 billion parameters during inference.
📈 The model's query volume in OpenClaw has jumped to the global top two, showing strong performance and stability.
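To illustrate what a sparse MoE with top-k routing means, here is a toy NumPy sketch (an assumption-laden simplification, not Step3.5Flash's actual architecture): a router picks only k of n experts per token, so only a fraction of expert parameters is active at a time, which is how a model can hold 196 billion total parameters yet activate only about 11 billion per inference step.

```python
# Toy sparse MoE layer with top-k routing. Sizes are tiny for illustration;
# the real model's dimensions and expert counts are far larger.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 8, 16, 2                    # hidden dim, experts, active experts

router_w = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]


def moe_forward(x):
    logits = x @ router_w                      # router score per expert
    topk = np.argsort(logits)[-k:]             # indices of k best experts
    gates = np.exp(logits[topk])
    gates /= gates.sum()                       # softmax over the top-k only
    # Only k of n_experts weight matrices touch this token:
    return sum(g * (x @ experts[i]) for g, i in zip(gates, topk))


x = rng.standard_normal(d)
y = moe_forward(x)
print(k / n_experts)   # fraction of experts active per token: 0.125
```

The same ratio logic applies at scale: roughly 11B active out of 196B total means each token exercises only about 6% of the model's parameters.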
6. Lin Junyang, head of Alibaba's Tongyi Qianwen and builder of the Qwen open-source ecosystem, announces his resignation
Lin Junyang was a core figure at Alibaba's Tongyi Qianwen, so his departure poses challenges for Alibaba's large-model strategy and talent retention, and it reflects how frequently talent moves in the large-model sector.

AiBase Summary:
🧠 Lin Junyang is the youngest P10-level technical leader at Alibaba, leading the construction of the Qwen open-source ecosystem.
🔄 His departure caused strong reactions in the AI academic community and developer community, comparable to Sam Altman leaving OpenAI.
🚀 Lin Junyang established a robotics and embodied intelligence group within his team, promoting AI to the physical world.
7. Lightning-fast response! Google launches Gemini 3.1 Flash-Lite: first-token speed up 2.5x, compute cost at a new low
Google's Gemini 3.1 Flash-Lite model stands out for response speed and cost-effectiveness, giving developers a more efficient real-time interaction experience. Its first-token response is 2.5 times faster and its overall output speed is 45% faster, at just $0.25 per million input tokens, greatly reducing the cost of AI deployment.

AiBase Summary:
⚡ First-token response speed improved by 2.5 times, significantly optimizing the real-time interaction experience.
💰 Input price as low as $0.25 per million tokens, lowering the threshold for large-scale AI deployment.
🧠 A new "thinking level" control supports flexible switching between efficiency and deep reasoning.
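At the quoted rate of $0.25 per million input tokens, deployment cost is easy to estimate. The snippet below is a simple illustration of that arithmetic; output-token pricing is not quoted in this item, so it is omitted.

```python
# Cost estimate at the quoted input-token rate for Gemini 3.1 Flash-Lite.
PRICE_PER_MILLION_INPUT = 0.25  # USD per 1M input tokens (quoted above)


def input_cost(tokens: int) -> float:
    """Dollar cost of processing the given number of input tokens."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_INPUT


# e.g. 10,000 requests averaging 2,000 input tokens each = 20M tokens:
print(round(input_cost(10_000 * 2_000), 2))   # 5.0
```

In other words, a workload of ten thousand two-thousand-token requests costs about five dollars in input tokens at this rate.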
8. Light as a butterfly! iFlytek AI Glasses make their global debut at MWC 2026: the first "lip-reading" noise reduction puts a translation assistant right before your eyes
iFlytek's AI glasses, unveiled at MWC 2026, weigh just 40 grams and feature an innovative multi-modal noise reduction technology based on lip reading. It solves the recognition difficulties traditional AI translation devices face in noisy environments, offering a more natural and efficient solution for cross-border communication.

AiBase Summary:
🧠 Ultra-lightweight design of 40 grams, solving the problem of heavy wear in AR/AI glasses.
👄 Multi-modal noise reduction technology based on lip-reading, increasing speech recognition accuracy by more than 50%.
🗣️ Delivers multi-modal simultaneous interpretation, supporting real-time subtitles and synchronized playback of translations.