An 80-billion-parameter instruction-tuned model based on Qwen3-Next, quantized with the Deckard qx64n mixed-precision scheme. It supports a context length of 1 million tokens and performs strongly in abstract reasoning, memory efficiency, and long-context processing.
Natural Language Processing
MLX
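A minimal usage sketch with the `mlx-lm` Python package, assuming the model is published as an MLX-format repository; the repository id `your-org/Qwen3-Next-80B-Instruct-qx64n-mlx` below is a placeholder, not a confirmed model path.

```python
# Minimal sketch: load an MLX-quantized instruct model and generate text.
# The repo id is a hypothetical placeholder; substitute the actual model path.
from mlx_lm import load, generate

model, tokenizer = load("your-org/Qwen3-Next-80B-Instruct-qx64n-mlx")

# Build a chat-formatted prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the key ideas of mixed-precision quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response; max_tokens bounds the output length.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

Note that an 80B model, even at reduced precision, requires substantial unified memory; verify your hardware meets the quantized model's memory footprint before loading.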