Mistral-7B-v0.1
Mistral-7B-v0.1 is a 7B-parameter pretrained generative language model by Mistral AI. It features grouped-query attention, sliding-window attention, and a byte-fallback BPE tokenizer, and delivers strong text-generation performance (reported to outperform Llama 2 13B on tested benchmarks). It is an open-weight base model with no built-in moderation, released under the Apache-2.0 license and available on Hugging Face for use with Transformers.
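The sliding-window attention mentioned above restricts each token to attending over a fixed-size window of recent positions rather than the full causal prefix. A minimal sketch of the resulting attention mask, using NumPy and an illustrative window size (the actual window size used by the model is defined in its configuration):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # Boolean mask: entry [i, j] is True when query position i may
    # attend to key position j. Causality (j <= i) is combined with
    # the sliding window (j > i - window), so each token sees at most
    # `window` positions, itself included.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(seq_len=6, window=3)
```

With this mask, early rows fall back to plain causal attention (fewer than `window` tokens exist yet), while later rows see exactly `window` positions, which keeps per-token attention cost constant as the sequence grows.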
Key Information
- Category: Language Models
- Source: Hugging Face
- Tags: text-generation
- Last updated: February 24, 2026
Structured Metrics
No structured metrics captured yet.
Links
Canonical source: https://huggingface.co/mistralai/Mistral-7B-v0.1