NVIDIA GPT-OSS-120B Eagle3 is an optimized version of OpenAI's gpt-oss-120b model. It uses a Mixture of Experts (MoE) architecture with roughly 120 billion total parameters, of which about 5 billion are active per token. The model is licensed for both commercial and non-commercial use and targets text generation tasks, particularly AI agent systems, chatbots, and similar applications.
Natural Language Processing
Safetensors
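For orientation, the following is a minimal sketch of running a gpt-oss-style checkpoint for text generation with the Hugging Face transformers pipeline; the repository identifier shown is an assumption and may not match the actual published checkpoint name, and Eagle3-specific serving (e.g., speculative decoding in a dedicated inference stack) is not covered here.

```python
# Minimal sketch: text generation with a gpt-oss-style checkpoint.
# The model identifier below is an assumption, not a confirmed repository name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",  # assumed base-model identifier
    device_map="auto",            # shard the MoE layers across available GPUs
    torch_dtype="auto",
)

messages = [
    {"role": "user", "content": "Summarize what a Mixture of Experts model is."}
]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```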