MiniMax-M1

MiniMax-M1 is an open-weight, large-scale hybrid-attention reasoning model built on a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism. It supports a context length of up to 1 million tokens and is trained with reinforcement learning on tasks ranging from mathematical reasoning to complex software engineering environments.
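Lightning attention belongs to the family of linear-attention mechanisms, which replace the softmax attention matrix with a kernel feature map so that keys and values can be aggregated once and reused for every query, bringing the cost down from quadratic to linear in sequence length. The sketch below is a minimal, non-causal, single-head illustration of that generic linear-attention trick (using the common `elu(x)+1` feature map); it is not MiniMax-M1's actual kernel or implementation.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Non-causal linear attention: O(n * d^2) instead of O(n^2 * d).

    Q, K: (n, d) query/key matrices; V: (n, d_v) value matrix.
    """
    # Feature map elu(x) + 1 keeps features positive so the
    # normalizer below is well defined.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    # Aggregate keys and values once, independent of the queries.
    KV = Kp.T @ V          # (d, d_v) summary of all key/value pairs
    Z = Kp.sum(axis=0)     # (d,)  normalizer accumulator
    # Each query reuses the same summaries: linear in sequence length.
    return (Qp @ KV) / (Qp @ Z)[:, None]
```

By associativity, `phi(Q) @ (phi(K).T @ V)` equals the quadratic form `(phi(Q) @ phi(K).T) @ V`, so the output matches explicit kernelized attention while never materializing the n-by-n attention matrix; that reordering is what makes million-token contexts tractable for this class of mechanisms.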

Key Information

  • Category: Language Models
  • Source: GitHub
  • Tags: Python
  • Last updated: February 24, 2026

Structured Metrics

No structured metrics captured yet.

Links

Canonical source: https://github.com/MiniMax-AI/MiniMax-M1