MiniMax M1
MiniMax’s open-weight reasoning-focused foundation model, released in June 2025 as part of the company’s MiniMaxWeek announcement series.
Key Specifications
- Release Date: June 2025
- License: Apache 2.0 (open weights, freely downloadable via GitHub)
- Architecture: Hybrid Mixture-of-Experts with Lightning Attention
- Compute Efficiency: roughly 75% fewer inference FLOPs than DeepSeek R1 (reported at a 100K-token generation length)
- Training Method: CISPO reinforcement learning (reported to match baseline performance in half the training steps)
Technical Innovations
Lightning Attention
A linear-attention mechanism, interleaved with standard softmax attention in the model's hybrid layout, that enables efficient long-context processing with reduced memory and compute requirements. Its cost grows linearly rather than quadratically with sequence length, so the savings compound at very long contexts.
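The sketch below shows the generic causal linear-attention recurrence that designs like Lightning Attention build on. It is a minimal illustration, not MiniMax's kernel: the production implementation is tiled and IO-aware, and the feature map used here is an assumption for illustration.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Causal linear attention via a running state.

    q, k, v: (seq_len, d) arrays. Cost grows as O(seq_len * d^2),
    i.e. linearly in sequence length, versus O(seq_len^2 * d) for
    softmax attention.
    """
    def phi(x):
        # Simple positive feature map -- an assumption for illustration.
        return np.maximum(x, 0.0) + eps

    q, k = phi(q), phi(k)
    d = q.shape[-1]
    state = np.zeros((d, d))   # running sum of outer(k_t, v_t)
    norm = np.zeros(d)         # running sum of k_t, for normalization
    out = np.empty_like(v)
    for t in range(q.shape[0]):
        state += np.outer(k[t], v[t])
        norm += k[t]
        out[t] = (q[t] @ state) / (q[t] @ norm)
    return out

# Tiny usage example
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (8, 4)
```

The key property is that the running state stays a fixed d×d matrix regardless of sequence length, which is what replaces the quadratic cost of materializing softmax attention scores with a linear one.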
CISPO Training
A reinforcement-learning recipe that clips the importance-sampling weights themselves rather than the token updates, so tokens whose ratios are clipped still contribute gradients; MiniMax reports that it reaches comparable performance in half the training steps of conventional PPO-style approaches.
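Below is a minimal PyTorch sketch of a CISPO-style loss matching the description above; the hyperparameter values are placeholders, not MiniMax's published settings.

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_high=0.3, eps_low=1.0):
    """CISPO-style policy loss sketch (placeholder hyperparameters).

    logp_new, logp_old, advantages: 1-D tensors over sampled tokens.
    With eps_low=1.0 the lower clip bound is 0, i.e. effectively no
    lower-side clipping, since importance ratios are nonnegative.
    """
    ratio = torch.exp(logp_new - logp_old)                        # IS weight
    weight = ratio.clamp(1.0 - eps_low, 1.0 + eps_high).detach()  # clip, then stop-gradient
    # REINFORCE-style objective: gradient flows only through logp_new.
    return -(weight * advantages * logp_new).mean()

# Tiny usage example with fake token statistics
torch.manual_seed(0)
logp_old = torch.randn(16)
logp_new = (logp_old + 0.1 * torch.randn(16)).requires_grad_()
adv = torch.randn(16)
loss = cispo_loss(logp_new, logp_old, adv)
loss.backward()
print(loss.item(), logp_new.grad.shape)
```

The design point: PPO-style surrogate clipping zeroes the gradient of any clipped token entirely, whereas clipping and detaching the weight keeps a bounded gradient flowing through every token's log-probability.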
Efficiency Gains
- Roughly 75% fewer inference FLOPs than DeepSeek R1 at long generation lengths (see the back-of-envelope sketch after this list)
- Optimized for both inference speed and training cost
- Suitable for resource-constrained deployments
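The long-context savings come mainly from Lightning Attention's linear scaling. The back-of-envelope below contrasts the dominant per-head attention costs (n²·d for softmax versus n·d² for a linear layer); the numbers are illustrative assumptions, not MiniMax's published accounting, and ignore the hybrid softmax layers and all non-attention compute.

```python
# Illustrative back-of-envelope only -- not MiniMax's published accounting.
n = 100_000  # generation/context length in tokens
d = 128      # per-head dimension (assumed)

softmax_flops = n * n * d  # O(n^2 * d): attention-score matrix dominates
linear_flops = n * d * d   # O(n * d^2): fixed-size running-state updates

print(f"softmax ~{softmax_flops:.1e} FLOPs per head")
print(f"linear  ~{linear_flops:.1e} FLOPs per head")
print(f"ratio   ~{softmax_flops / linear_flops:,.0f}x")
```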
Availability
- Free download on GitHub
- API access through the MiniMax platform (see the sketch after this list)
- Enterprise deployment options
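For orientation, a minimal sketch of calling the model through an OpenAI-compatible chat endpoint; the base URL and model identifier below are placeholders, and MiniMax's API documentation should be consulted for the real values.

```python
from openai import OpenAI

# Base URL and model id are placeholders, not MiniMax's real endpoint --
# this sketch only assumes an OpenAI-compatible chat API.
client = OpenAI(
    base_url="https://api.example-minimax-platform.com/v1",
    api_key="YOUR_MINIMAX_API_KEY",
)

resp = client.chat.completions.create(
    model="MiniMax-M1",  # placeholder model identifier
    messages=[{"role": "user",
               "content": "Explain Lightning Attention in one sentence."}],
)
print(resp.choices[0].message.content)
```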
Competitive Position
Positioned against:
- DeepSeek R1
- Llama 3.x series
- Qwen models
- Mistral Large
See Also
- MiniMax-Text-01 - Earlier text model
- MiniMax-VL-01 - Vision-language model
- MiniMax Speech-02 - Text-to-speech model