QwQ-32B-Preview-gptqmodel-4bit-vortex-v3
This is a 4-bit GPTQ quantization of the QwQ-32B-Preview model (itself based on Qwen2.5-32B-Instruct), produced with the GPTQModel toolkit and intended for efficient inference and low-resource deployment.
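A minimal loading sketch using the `gptqmodel` library (which this checkpoint's name implies). The Hub ID below is an assumption; substitute the actual repository path. Note that a 32B 4-bit checkpoint still requires a GPU with roughly 20 GB of memory.

```python
# Hedged sketch: assumes `pip install gptqmodel` and that the model is
# published under the Hub ID below (adjust to the real repo path).
from gptqmodel import GPTQModel

model = GPTQModel.load("ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3")

# Generate a short completion and decode it with the bundled tokenizer.
result = model.generate("Why is the sky blue?")[0]
print(model.tokenizer.decode(result))
```

For serving rather than ad-hoc generation, the same quantized checkpoint can typically also be loaded by vLLM or `transformers` with GPTQ support enabled.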