Phi-3-mini-4k-instruct
A 3.8B-parameter, lightweight, instruction-tuned language model from Microsoft, trained on the Phi-3 datasets. It targets robust text generation, logical reasoning, and multi-turn conversation. This variant supports a 4K-token context window; a separate 128K-context variant of Phi-3-mini is also available.
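For multi-turn conversation, Phi-3-mini expects each turn wrapped in role markers and terminated with an end token. A minimal sketch of building such a prompt by hand, assuming the `<|user|>`/`<|assistant|>`/`<|end|>` markers documented on the model card (the helper name `build_phi3_prompt` is illustrative, not part of any library):

```python
def build_phi3_prompt(turns):
    """Build a Phi-3 chat prompt from (role, text) turns.

    Each turn is wrapped as <|role|>\n...<|end|>\n, and the prompt
    ends with <|assistant|>\n to cue the model to respond.
    """
    parts = []
    for role, text in turns:
        parts.append(f"<|{role}|>\n{text}<|end|>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_phi3_prompt([("user", "What is the capital of France?")])
```

In practice the same formatting is normally produced by the tokenizer's chat template (e.g. `tokenizer.apply_chat_template` in Hugging Face `transformers`) rather than assembled manually.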
Key Information
- Category: Language Models
- Source: Hugging Face
- Tags: text-generation
- Last updated: March 03, 2026
Structured Metrics
No structured metrics captured yet.
Links
Canonical source: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct