DeepSeek-R1 Distill Qwen 14B GGUF
A quantized (GGUF) variant of the DeepSeek-R1 reasoning model, distilled from Qwen 14B. The model supports a 128k-token context length and is tuned for reasoning and chain-of-thought tasks. It is published by the lmstudio-community organization on Hugging Face and quantized using llama.cpp.
Key Information
- Category: Language Models
- Source: Hugging Face
- Tags: text-generation
- Last updated: March 03, 2026
Structured Metrics
No structured metrics captured yet.
Links
Canonical source: https://huggingface.co/lmstudio-community/DeepSeek-R1-Distill-Qwen-14B-GGUF
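Since the repository ships GGUF files for llama.cpp, a typical workflow is to download a single quantization and run it locally. The sketch below is an assumption about usage, not part of the card: the exact `.gguf` filename depends on which quantization you pick from the repo's file list, and the chosen name here is illustrative.

```shell
# Download one quantization from the canonical repo
# (the .gguf filename is an assumption; check the repo's "Files" tab)
huggingface-cli download lmstudio-community/DeepSeek-R1-Distill-Qwen-14B-GGUF \
  DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf --local-dir .

# Run with llama.cpp's CLI; -c sets the context window
# (the model supports up to 128k tokens, memory permitting)
./llama-cli -m DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf \
  -c 8192 \
  -p "Solve step by step: what is 17 * 23?"
```

The model also loads directly in LM Studio, which resolves the quantization choice through its UI.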