
MiniMax-01

Created by MiniMax-AI

Provides access to large language and vision-language models built on a hybrid linear-attention architecture for natural language and multimodal tasks.

About

MiniMax-01 is the official repository for MiniMax-Text-01, a large language model with 456 billion parameters, and MiniMax-VL-01, a vision-language model built on a ViT-MLP-LLM framework. MiniMax-Text-01 achieves its long-context capability through a hybrid architecture that combines Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE), enabling it to handle contexts of up to 4 million tokens at inference time. MiniMax-VL-01 extends this backbone with visual understanding and a dynamic-resolution mechanism, achieving top-tier performance on multimodal leaderboards.
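
As a rough illustration of why the linear-attention side of this hybrid scales to million-token contexts, the sketch below contrasts standard softmax attention, whose score matrix grows quadratically with sequence length, with a generic kernelized linear attention that summarizes keys and values in a fixed-size matrix. The feature map, shapes, and function names are illustrative assumptions; this is not the repository's Lightning Attention implementation.

```python
# Conceptual sketch only: generic linear attention vs. softmax attention.
# Not the MiniMax-01 Lightning Attention code.
import numpy as np

def softmax_attention(q, k, v):
    # Standard attention: materializes an (n x n) score matrix, so cost is
    # quadratic in sequence length n.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def linear_attention(q, k, v, feature_map=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: associativity lets keys and values be summarized
    # in a (d x d) matrix once, so cost grows linearly with sequence length.
    q, k = feature_map(q), feature_map(k)
    kv = k.T @ v                    # (d, d) summary, independent of n
    z = q @ k.sum(axis=0)           # per-query normalizer, shape (n,)
    return (q @ kv) / z[:, None]

n, d = 8, 4
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, n, d))
print(softmax_attention(q, k, v).shape)   # (8, 4)
print(linear_attention(q, k, v).shape)    # (8, 4)
```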

Key Features

  • Large language model with 456 billion parameters
  • Hybrid attention architecture (Lightning Attention, Softmax Attention, MoE)
  • Training context length of 1 million tokens; inference up to 4 million tokens
  • Vision-language model with a dynamic resolution mechanism
  • ViT-MLP-LLM framework for multimodal capabilities (see the sketch after this list)
  • 2,495 GitHub stars
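
The sketch below shows the general shape of a ViT-MLP-LLM pipeline: a vision encoder produces patch features, an MLP projector maps them into the language model's embedding space, and the projected image tokens are spliced into the text token stream. Module names, dimensions, and the handling of dynamic resolution are illustrative assumptions, not the repository's implementation.

```python
# Illustrative ViT-MLP-LLM pipeline sketch; not the MiniMax-VL-01 code.
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Maps vision-encoder patch features into the LLM's embedding space."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        return self.net(patch_features)

# Toy dimensions for illustration.
vision_dim, llm_dim = 1024, 4096
projector = MLPProjector(vision_dim, llm_dim)

# Pretend the ViT has already encoded an image into 256 patch embeddings.
patch_features = torch.randn(1, 256, vision_dim)
image_tokens = projector(patch_features)        # (1, 256, llm_dim)

# Pretend these are embedded text tokens from the LLM's embedding table.
text_tokens = torch.randn(1, 32, llm_dim)

# The multimodal sequence fed to the language model: image tokens spliced
# into the text stream.
llm_inputs = torch.cat([image_tokens, text_tokens], dim=1)
print(llm_inputs.shape)  # torch.Size([1, 288, 4096])
```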

Use Cases

  • Academic research and benchmarking of large language models
  • Development of applications requiring long-context understanding (see the loading sketch below)
  • Multimodal task solving using vision and language
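
For application development, a minimal loading sketch is shown below. It assumes the text model is published on the Hugging Face Hub under the ID MiniMaxAI/MiniMax-Text-01 and loads through transformers with trust_remote_code enabled; the exact model ID, dtype, quantization options, and hardware requirements should be checked against the repository's own instructions.

```python
# Minimal loading sketch, not a verified recipe. The Hub ID, dtype, and
# hardware assumptions below should be confirmed against the repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-Text-01"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",     # shards the 456B-parameter model across devices
    torch_dtype="auto",
)

prompt = "Summarize the following document:\n..."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```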