Mistral-7B-Instruct-v0.3 - AI Language Models Tool
Overview
Mistral-7B-Instruct-v0.3 is a 7.2-billion-parameter, instruction-tuned language model released by Mistral AI and published on Hugging Face under the Apache-2.0 license. It is an instruct-focused variant of the Mistral-7B family, optimized for chat and assistant-style interactions, instruction following, and tool-enabled workflows. According to the Hugging Face model page, the model uses the v3 tokenizer with an extended vocabulary of 32,768 tokens and exposes native function/tool-calling patterns suited to building tool-enabled agents and structured outputs (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
The model is compatible with the Hugging Face Transformers ecosystem (versions >= 4.42) and can be run locally or via Mistral's own inference tooling (the mistral-inference CLI and Python client). The model card and repository metadata list Apache-2.0 licensing and reference the base checkpoint mistralai/Mistral-7B-v0.3. Community adoption on Hugging Face indicates broad usage (over 900,000 downloads at the time of the statistics below), making this release a practical choice for developers who need an open, instruction-tuned LLM with tool-calling support.
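The tokenizer and version claims above can be checked locally without downloading the model weights. The snippet below is a minimal sanity-check sketch, assuming transformers >= 4.42 is installed.
Example (python):
import transformers
from transformers import AutoTokenizer

# The model card recommends Transformers >= 4.42 for this checkpoint
print(transformers.__version__)

# Downloading only the tokenizer is enough to inspect the v3 vocabulary
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# The extended v3 vocabulary should report 32,768 entries per the model card
print(len(tokenizer))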
Model Statistics
- Downloads: 936,189
- Likes: 2,340
- Parameters: 7.2B
- License: apache-2.0
Model Details
- Architecture and size: Mistral-7B-Instruct-v0.3 is a decoder-only transformer with approximately 7.2 billion parameters (as reported on the model page). It is an instruction-tuned variant derived from the Mistral-7B-v0.3 base checkpoint and adapted for assistant and instruction-following tasks.
- Tokenizer and vocabulary: The model uses Mistral's v3 tokenizer with an extended vocabulary of 32,768 tokens, which improves subword coverage across many languages and tokenization efficiency (Hugging Face model card).
- Capabilities: The model is tuned for chat-style generation, direct instruction following, and structured tool/function calling. Its native function/tool-calling patterns simplify integrating external tools, APIs, or deterministic response formats into agent workflows (see the tool-calling sketch below).
- Deployment: Usable with Hugging Face Transformers (>= 4.42) and with Mistral's mistral-inference tooling (CLI and Python client) for low-latency inference and integration into production pipelines (a download-and-run sketch follows the Key Features list).
- License and distribution: Distributed under the Apache-2.0 license and published on Hugging Face as mistralai/Mistral-7B-Instruct-v0.3 (see the model page for the full card and provenance).
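To make the tool-calling capability concrete, the sketch below routes a conversation plus a tool schema through Transformers' chat-template API (available from version 4.42). The get_current_weather function is a hypothetical example tool, not part of the model or the library, and the exact rendering of tools can vary with the checkpoint's bundled chat template, so treat this as a sketch rather than a definitive recipe.
Example (python):
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical example tool; the type hints and docstring are used by
# Transformers to build the JSON schema advertised to the model.
def get_current_weather(location: str, unit: str = "celsius") -> str:
    """Get the current weather for a location.

    Args:
        location: The city to look up.
        unit: Temperature unit, either "celsius" or "fahrenheit".
    """
    return json.dumps({"location": location, "temperature": "22", "unit": unit})

messages = [
    {"role": "user", "content": "What is the weather like in Paris right now?"}
]

# The chat template renders the conversation together with the available tools
input_ids = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_weather],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Decode only the completion; a tool request typically arrives as a
# [TOOL_CALLS] payload (tool name plus JSON arguments) for the caller to
# parse, execute, and feed back to the model.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:]))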
Key Features
- Instruction-tuned for chat and assistant-style prompts
- v3 tokenizer with extended 32,768 token vocabulary
- Native function/tool-calling patterns for tool-enabled workflows
- Compatible with Hugging Face Transformers (>=4.42)
- Apache-2.0 licensed, derived from Mistral-7B-v0.3 base model
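Beyond the Transformers path, the Deployment notes above mention Mistral's own mistral-inference tooling. The sketch below follows the model card's documented flow: download the raw weights and v3 tokenizer with huggingface_hub, then start an interactive session with the mistral-chat CLI. The local directory name is an arbitrary choice for this example, and the CLI flags should be verified against the installed mistral-inference version.
Example (python):
from pathlib import Path
from huggingface_hub import snapshot_download

# Requires: pip install mistral_inference huggingface_hub
# Arbitrary local directory for the raw weights
models_path = Path.home() / "mistral_models" / "7B-Instruct-v0.3"
models_path.mkdir(parents=True, exist_ok=True)

# Fetch only the files the mistral-inference runtime needs
snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=models_path,
)

# Then, from a shell, start an interactive chat (per the model card; verify
# the flags against your mistral-inference version):
#   mistral-chat ~/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256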
Example Usage
Example (python):
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "mistralai/Mistral-7B-Instruct-v0.3"

# Load tokenizer and model (requires transformers >= 4.42)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Build the request as a chat message and render it with the model's chat
# template so the prompt matches the [INST] ... [/INST] format the instruct
# checkpoint was tuned on
messages = [
    {
        "role": "user",
        "content": (
            "Respond concisely. Summarize the following text in two sentences:\n"
            "---\n"
            "Mistral-7B-Instruct-v0.3 is an instruction-tuned LLM with native "
            "function calling and an extended 32,768-token vocabulary.\n"
            "---"
        ),
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding for a deterministic summary
outputs = model.generate(
    input_ids,
    max_new_tokens=120,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Benchmarks
- Hugging Face downloads: 936,189 (Source: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Hugging Face likes: 2,340 (Source: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Parameters: 7.2B (Source: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Tokenizer vocabulary size: 32,768 (Source: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- License: Apache-2.0 (Source: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
Key Information
- Category: Language Models
- Type: AI Language Models Tool