AIBase

AI News


A New Benchmark for the 30B Class! Zhipu AI Open-Sources GLM-4.7-Flash, Outperforming Alibaba and OpenAI in Multiple Tests

GLM-4.7-Flash, a 30B-A3B MoE model (30B total parameters, roughly 3B activated per token), excels in reasoning and coding, topping performance charts for its size...

13.7k · 1 hour ago

Models


Tongyi DeepResearch 30B A3B MXFP4_MOE GGUF

noctrex

A quantized version of Alibaba's Tongyi DeepResearch 30B-A3B model. It applies MXFP4_MOE quantization together with an importance-matrix (imatrix) calibration pass, aiming to cut memory and compute cost while preserving output quality, and is intended for text-generation tasks.

Natural Language Processing · GGUF
noctrex
302
1
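MXFP4-style quantization stores weights in blocks that share a single power-of-two scale, with each element reduced to a 4-bit float (E2M1: one sign bit, two exponent bits, one mantissa bit). A minimal sketch of the idea in plain Python; the block size of 32 and the E2M1 level set follow the MXFP4 format, but this is an illustration, not llama.cpp's actual implementation:

```python
import math

# Representable magnitudes of an FP4 E2M1 element (as used by MXFP4).
FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
BLOCK = 32  # MXFP4 shares one power-of-two scale per 32-element block

def quantize_block(block):
    """Return (exponent, codes): a shared 2**exponent scale plus one
    (sign, nearest-level-index) FP4 code per element."""
    max_abs = max(abs(x) for x in block)
    # Pick a power-of-two scale so the largest value lands near 6.0,
    # the top of the E2M1 range.
    exponent = math.floor(math.log2(max_abs / 6.0)) if max_abs > 0 else 0
    scale = 2.0 ** exponent
    codes = []
    for x in block:
        target = abs(x) / scale
        idx = min(range(len(FP4_LEVELS)),
                  key=lambda i: abs(FP4_LEVELS[i] - target))
        codes.append((-1 if x < 0 else 1, idx))
    return exponent, codes

def dequantize_block(exponent, codes):
    scale = 2.0 ** exponent
    return [sign * FP4_LEVELS[idx] * scale for sign, idx in codes]

weights = [0.013, -0.4, 0.75, 1.9] * 8          # one 32-element block
exp, codes = quantize_block(weights)
restored = dequantize_block(exp, codes)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"shared scale 2**{exp}, max abs error {err:.3f}")
```

An imatrix pass, by contrast, weights the rounding error by how much each weight actually matters on calibration data, rather than treating all elements in a block equally as above.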

Qwen3 30B A1.5B High Speed GGUF

Mungert

An efficient inference variant fine-tuned from Qwen3 30B-A3B (MoE) that nearly doubles generation speed by reducing the number of active experts; it supports multiple quantization formats and a 40K-token context length.

Natural Language Processing · Transformers
Mungert
732
1
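The near-2x speed-up claimed above follows directly from how MoE inference cost scales: each token only runs through its routed experts, so per-token FLOPs in the expert layers grow linearly with the number of active experts. A minimal back-of-the-envelope sketch (the dimensions are illustrative placeholders, not Qwen3's actual configuration):

```python
def moe_flops_per_token(d_model, d_ff, n_active_experts):
    """Approximate FLOPs for one token through one MoE FFN layer:
    each active expert runs an up-projection and a down-projection,
    and each matrix multiply costs ~2 * d_model * d_ff FLOPs."""
    return n_active_experts * 2 * (2 * d_model * d_ff)

# Hypothetical dims; only the ratio matters here.
base = moe_flops_per_token(d_model=2048, d_ff=768, n_active_experts=8)
fast = moe_flops_per_token(d_model=2048, d_ff=768, n_active_experts=4)
print(f"expert-layer speedup ~{base / fast:.1f}x")
```

Halving the active experts halves expert-layer FLOPs, which is why the real-world speed-up is "nearly" rather than exactly 2x: attention, routing, and embedding costs are unchanged.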
© 2026 AIBase