
Model Overview

MiniMax offers multiple text models to meet different scenario requirements. MiniMax-M2.7 achieves or sets new SOTA results on benchmarks covering programming, tool calling and search, office productivity, and other scenarios, while MiniMax-M2 is built for efficient coding and agent workflows.

Supported Models

| Model Name | Context Window | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 204,800 | Beginning the journey of recursive self-improvement (output speed approximately 60 tps) |
| MiniMax-M2.7-highspeed | 204,800 | M2.7 Highspeed: same performance, faster and more agile (output speed approximately 100 tps) |
| MiniMax-M2.5 | 204,800 | Peak performance, ultimate value, master the complex (output speed approximately 60 tps) |
| MiniMax-M2.5-highspeed | 204,800 | M2.5 Highspeed: same performance, faster and more agile (output speed approximately 100 tps) |
| MiniMax-M2.1 | 204,800 | Powerful multi-language programming capabilities with a comprehensively enhanced programming experience (output speed approximately 60 tps) |
| MiniMax-M2.1-highspeed | 204,800 | Faster and more agile (output speed approximately 100 tps) |
| MiniMax-M2 | 204,800 | Agentic capabilities, advanced reasoning |
For details on how tps (Tokens Per Second) is calculated, please refer to FAQ > About APIs.

MiniMax M2.7 Key Highlights

M2.7 delivers outstanding performance in real-world software engineering, including end-to-end full project delivery, log analysis and bug troubleshooting, code security, machine learning, and more. On the SWE-Pro benchmark, M2.7 scored 56.22%, nearly approaching Opus’s best level. This capability also extends to end-to-end full project delivery scenarios (VIBE-Pro 55.6%) and deep understanding of complex engineering systems on Terminal Bench 2 (57.0%).
In the professional office domain, we have enhanced the model’s expertise and task delivery capabilities across various fields. Its ELO score on GDPval-AA is 1495, the highest among open-source models. M2.7 shows significantly improved ability for complex editing in the Office suite — Excel, PPT, and Word — and can better handle multi-round revisions and high-fidelity editing. M2.7 is capable of interacting with complex environments: across 40 complex skills (each exceeding 2,000 tokens), it still maintains a 97% skill adherence rate.
M2.7 possesses excellent character consistency and emotional intelligence, opening up more room for product innovation.
For more model details, please refer to MiniMax M2.7.

Calling Example

1. Install the Anthropic SDK (Recommended)

pip install anthropic

2. Set Environment Variables

export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}

3. Call MiniMax-M2.7

Python
import anthropic

# Reads ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY from the environment (set in step 2)
client = anthropic.Anthropic()

message = client.messages.create(
    model="MiniMax-M2.7",
    max_tokens=1000,
    system="You are a helpful assistant.",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Hi, how are you?"
                }
            ]
        }
    ]
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking:\n{block.thinking}\n")
    elif block.type == "text":
        print(f"Text:\n{block.text}\n")

4. Example Output

{
  "thinking": "The user is just greeting me casually. I should respond in a friendly, professional manner.",
  "text": "Hi there! I'm doing well, thanks for asking. I'm ready to help you with whatever you need today—whether it's coding, answering questions, brainstorming ideas, or just chatting. What can I do for you?"
}
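The same request can also be streamed. The sketch below uses the Anthropic SDK's `messages.stream` helper and assumes the environment variables from step 2 are set; the `build_request` helper is a hypothetical name introduced here so the request shape can be inspected without a network call.

```python
def build_request():
    # Pure helper (hypothetical, for illustration): returns the same request
    # shape as the non-streaming example above.
    return {
        "model": "MiniMax-M2.7",
        "max_tokens": 1000,
        "system": "You are a helpful assistant.",
        "messages": [{"role": "user", "content": "Hi, how are you?"}],
    }

if __name__ == "__main__":
    # Deferred import so the helper above can be used without the SDK installed.
    import anthropic  # pip install anthropic

    # Reads ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()
    with client.messages.stream(**build_request()) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)
    print()
```

Streaming returns tokens as they are generated, which noticeably improves perceived latency for longer completions.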

API Reference

Anthropic API Compatible (Recommended)

Call MiniMax models via the Anthropic SDK, with support for streaming output and Interleaved Thinking

OpenAI API Compatible

Call MiniMax models via OpenAI SDK
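A minimal sketch of the OpenAI-compatible path, under stated assumptions: the base URL `https://api.minimax.io/v1` and the `MINIMAX_API_KEY` environment variable name are illustrative guesses, not confirmed by this page — check the OpenAI API Compatible reference for the exact values.

```python
def build_chat_request():
    # Pure helper (hypothetical, for illustration): the OpenAI-style request
    # body, with the system prompt carried as a message rather than a field.
    return {
        "model": "MiniMax-M2.7",
        "max_tokens": 1000,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hi, how are you?"},
        ],
    }

if __name__ == "__main__":
    import os
    # Deferred import so the helper above can be used without the SDK installed.
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        base_url="https://api.minimax.io/v1",      # assumed endpoint
        api_key=os.environ["MINIMAX_API_KEY"],     # assumed variable name
    )
    resp = client.chat.completions.create(**build_chat_request())
    print(resp.choices[0].message.content)
```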

Text Generation

Call text generation API directly via HTTP requests
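For a raw HTTP call without any SDK, the sketch below targets the Anthropic-compatible endpoint implied by step 2 (`ANTHROPIC_BASE_URL` plus the standard `/v1/messages` path — an assumption, since this page does not spell out the raw URL) using only the Python standard library.

```python
import json

# Assumed URL: the base URL from step 2 plus the Anthropic Messages API path.
ENDPOINT = "https://api.minimax.io/anthropic/v1/messages"

def build_body():
    # Pure helper (hypothetical, for illustration): the JSON request body.
    return json.dumps({
        "model": "MiniMax-M2.7",
        "max_tokens": 1000,
        "messages": [{"role": "user", "content": "Hi, how are you?"}],
    }).encode("utf-8")

if __name__ == "__main__":
    import os
    import urllib.request

    req = urllib.request.Request(
        ENDPOINT,
        data=build_body(),
        headers={
            "content-type": "application/json",
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["content"])
```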

Using M2.7 in AI Coding Tools

Use M2.7 in Claude Code, Cursor, Cline, and other tools

Contact Us

If you encounter any issues while using MiniMax models:
  • Contact our technical support team through official channels, such as by email: Model@minimax.io
  • Submit an issue on our GitHub repository