Qwen/QwQ-32B-Preview
QwQ-32B-Preview is an experimental, preview-release large language model from the Qwen Team with 32.5B parameters. It is designed to advance AI reasoning and text generation, supports context lengths of up to 32,768 tokens, and is built on a transformer architecture with RoPE, SwiGLU, and RMSNorm. The model is geared toward research use and demonstrates strong capabilities in math and coding, despite noted limitations in language consistency and common-sense reasoning.
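The three architectural components named above (RoPE, SwiGLU, RMSNorm) are standard transformer building blocks. As a rough orientation, here is a minimal NumPy sketch of each; this is a reference-style illustration under assumed shapes and naming, not the model's actual implementation:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the reciprocal root-mean-square of the last axis.
    # Unlike LayerNorm, no mean is subtracted and no bias is added.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu(x, w_gate, w_up, w_down):
    # SwiGLU feed-forward: a SiLU-gated linear unit followed by a
    # down-projection. Weight names here are illustrative.
    def silu(z):
        return z / (1.0 + np.exp(-z))
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

def rope(x, pos, base=10000.0):
    # Rotary position embedding: rotate pairs of channels by a
    # position-dependent angle; the rotation preserves vector norms.
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

These sketches show the component math only; the production implementation fuses and batches these operations and applies RoPE per attention head.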
Key Information
- Category: Language Models
- Source: Huggingface
- Tags: text-generation
- Last updated: March 13, 2026
Structured Metrics
No structured metrics captured yet.
Links
Canonical source: https://huggingface.co/Qwen/QwQ-32B-Preview