Connect Kimi, GLM-5, and MiniMax to OpenClaw via OpenRouter

OpenClaw is one of the hottest open-source AI Agent projects of 2026, often called "the operating system for personal AI." It drives large language models through mainstream IM platforms to automatically execute tasks — sending emails, writing code, managing files, and even operating your entire computer to boost personal productivity. For users who find it difficult to purchase Anthropic or OpenAI API keys directly, OpenRouter provides a convenient alternative for topping up and accessing these models.

While Claude 4.6 is the best fit for OpenClaw, it's expensive. For everyday tasks, Chinese-made models like Kimi K2.5, Z.ai (Zhipu) GLM-5 Turbo, and MiniMax M2.7 are more than capable — and far more cost-effective. OpenRouter routes to the international versions of these models, which typically have more relaxed content filtering and better overall performance compared to their domestic counterparts, at prices close to the domestic APIs. With OpenRouter, you can access all these models with a single account instead of registering and topping up with each provider separately.

This guide shows you how to use OpenRouter as a middleware layer — one API key for all models — then configure custom Providers in OpenClaw to seamlessly drive these Chinese-made models.

(Image: OpenRouter supports 300+ models)


Why OpenRouter?

OpenRouter is a large, reputable AI model API routing platform that supports 300+ models through a unified interface with unified billing. For users outside the US, its core advantages are clear: one account and one API key cover every model, billing is consolidated in one place, and there is no need to register and top up with each provider separately.

Regarding fees, OpenRouter charges a 5% top-up fee plus $0.35 per transaction. So a $10 top-up yields approximately $9.15 in usable credit. Considering the hassle saved from setting up virtual cards, this fee is perfectly acceptable.
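The arithmetic behind that figure is straightforward; here is a quick sketch using the 5% rate and $0.35 flat fee quoted above:

```python
def usable_credit(top_up_usd: float) -> float:
    """Credit received after OpenRouter's 5% top-up fee plus $0.35 flat fee."""
    FEE_RATE = 0.05   # 5% of the top-up amount
    FLAT_FEE = 0.35   # fixed per-transaction charge
    return round(top_up_usd * (1 - FEE_RATE) - FLAT_FEE, 2)

print(usable_credit(10))  # a $10 top-up yields about $9.15
```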


Comparison of Three Models

Here's how the three models compare on OpenRouter in terms of pricing and capabilities:

| Model | OpenRouter ID | Input Price | Output Price | Context | Vision | Reasoning | Key Strengths |
|---|---|---|---|---|---|---|---|
| Chinese models (via OpenRouter custom Provider) | | | | | | | |
| Kimi K2.5 (Moonshot AI) | moonshotai/kimi-k2.5 | $0.45/M | $2.20/M | 262K | Yes | Yes | Multimodal, vision encoding, ultra-long context; SWE-bench ML surpasses GPT-5.2 |
| GLM-5 Turbo (Zhipu Z.ai) | z-ai/glm-5-turbo | $0.96/M | $3.20/M | 202K | No | Yes | Agent-specific optimization, zero-error tool calling; 744B MoE, max output 128K |
| MiniMax M2.5 | minimax/minimax-m2.5 | $0.20/M | $1.17/M | 197K | No | Yes | Best cost-performance ratio, Office file operations; SWE-Bench 80.2% |
| MiniMax M2.7 | minimax/minimax-m2.7 | $0.30/M | $1.20/M | 205K | No | Yes | Multi-agent collaboration, self-evolution; SWE-Pro 56.2%, Terminal Bench 57.0% |
| Reference (OpenRouter built-in, directly usable) | | | | | | | |
| Claude Sonnet 4.6 (Anthropic) | (built-in) | $3.00/M | $15.00/M | 1M | Yes | Yes | Strongest coding/agent at the Sonnet tier; output costs 13× that of M2.5 |
| Claude Opus 4.6 (Anthropic) | (built-in) | $5.00/M | $25.00/M | 1M | Yes | Yes | Anthropic's flagship, best for long tasks; output costs 21× that of M2.5 |

Quick summary: For daily OpenClaw use, MiniMax M2.5 is the cheapest. For complex reasoning and multimodal tasks, use Kimi K2.5. For ultra-stable, long-chain tool calling and high-reliability scenarios, use GLM-5 Turbo.


OpenRouter Top-Up Guide

  1. Go to openrouter.ai, register and log in with your email or GitHub account.
  2. Click your avatar in the top-right corner, select Credits, or go directly to https://openrouter.ai/settings/credits.
  3. Click Add Credits. In the pop-up window, toggle on "Use one-time payment methods" — this is a crucial step; without it, you won't see alternative payment options.
  4. Select your preferred payment method, fill in a billing address, and complete the payment.
  5. After payment, return to the Credits page to confirm your balance has arrived.

It's recommended to top up at least $10 at once, because OpenRouter's free model quota policy requires a cumulative $10 in top-ups to unlock 1,000 free daily calls; otherwise you only get 50.

After topping up, go to the Keys page and create an API Key (starts with sk-or-). Copy and save it securely.
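Before wiring the key into OpenClaw, it's worth confirming it works. A minimal sketch against OpenRouter's OpenAI-compatible chat endpoint (the endpoint path and Bearer header follow the OpenAI-compatible convention; the model ID is one of those used later in this guide):

```python
import json
import os
import urllib.request

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build a minimal chat-completion request for OpenRouter's OpenAI-compatible API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Only fires the request if the key is actually set in the environment.
api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    req = build_chat_request("minimax/minimax-m2.5", "Reply with OK.", api_key)
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If the call returns a normal chat completion, the key and your credit balance are both working.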


OpenClaw Configuration — Step by Step

This is the key part of this guide. OpenClaw's built-in OpenRouter driver works well with Claude-series models out of the box. However, for Kimi, GLM-5 Turbo, and MiniMax, the built-in driver's compatibility is insufficient — in testing (v2026.3.13-1), issues were found with tool calling, reasoning parameter passing, and other areas.

The solution is to manually add a custom OpenAI-compatible Provider in OpenClaw's config.json, pointing to OpenRouter's API endpoint, and then define model parameters individually. This way, OpenClaw communicates with OpenRouter using the standard OpenAI Completions API protocol, which provides much better compatibility.

Below is the complete configuration. Add it to your OpenClaw config.json:

1. Add a Custom Provider

Under models.providers, add a new openroutercustom node:

"openroutercustom": {
  "baseUrl": "https://openrouter.ai/api/v1",
  "apiKey": "${OPENROUTER_API_KEY}",
  "api": "openai-completions",
  "models": [
    {
      "id": "moonshotai/kimi-k2.5",
      "name": "Kimi K2.5",
      "reasoning": true,
      "input": ["text", "image"],
      "cost": {
        "input": 0.45,
        "output": 2.2,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 262144,
      "maxTokens": 8192
    },
    {
      "id": "minimax/minimax-m2.5",
      "name": "MiniMax M2.5",
      "reasoning": true,
      "input": ["text"],
      "cost": {
        "input": 0.2,
        "output": 1.17,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 196608,
      "maxTokens": 8192
    },
    {
      "id": "minimax/minimax-m2.7",
      "name": "MiniMax M2.7",
      "reasoning": true,
      "input": ["text"],
      "cost": {
        "input": 0.3,
        "output": 1.2,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 204800,
      "maxTokens": 8192
    },
    {
      "id": "z-ai/glm-5-turbo",
      "name": "GLM-5 Turbo",
      "reasoning": true,
      "input": ["text"],
      "cost": {
        "input": 0.96,
        "output": 3.2,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 202752,
      "maxTokens": 8192
    }
  ]
}
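Hand-editing config.json invites missing-field mistakes. A small structural check, assuming only the entry shape shown above (the field set mirrors the example; the helper name is illustrative):

```python
import json

REQUIRED_MODEL_FIELDS = {"id", "name", "reasoning", "input",
                         "cost", "contextWindow", "maxTokens"}

def check_provider(provider: dict) -> list:
    """Return a list of problems found in a custom provider entry."""
    problems = []
    for field in ("baseUrl", "apiKey", "api", "models"):
        if field not in provider:
            problems.append(f"provider missing '{field}'")
    for model in provider.get("models", []):
        missing = REQUIRED_MODEL_FIELDS - set(model)
        if missing:
            problems.append(f"{model.get('id', '?')} missing {sorted(missing)}")
    return problems

provider = json.loads("""
{
  "baseUrl": "https://openrouter.ai/api/v1",
  "apiKey": "${OPENROUTER_API_KEY}",
  "api": "openai-completions",
  "models": [{"id": "minimax/minimax-m2.5", "name": "MiniMax M2.5",
              "reasoning": true, "input": ["text"],
              "cost": {"input": 0.2, "output": 1.17},
              "contextWindow": 196608, "maxTokens": 8192}]
}
""")
print(check_provider(provider))  # [] means the structure looks right
```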

2. Register Models in Agent Configuration

Under agents.defaults.models, add corresponding entries:

"openroutercustom/moonshotai/kimi-k2.5": {
  "alias": "Kimi K2.5"
},
"openroutercustom/minimax/minimax-m2.5": {
  "alias": "MiniMax M2.5"
},
"openroutercustom/minimax/minimax-m2.7": {
  "alias": "MiniMax M2.7"
},
"openroutercustom/z-ai/glm-5-turbo": {
  "alias": "GLM-5 Turbo"
}

3. Set Default Model (Optional)

To make OpenClaw default to a specific Chinese model, modify agents.defaults.model.primary:

"model": {
  "primary": "openroutercustom/moonshotai/kimi-k2.5"
}

4. Configure Environment Variable

In OpenClaw's .env file or your system environment variables, set your OpenRouter API Key:

export OPENROUTER_API_KEY="sk-or-v1-your-key-here"
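A small guard in any helper script catches a missing or malformed key early. A sketch that assumes only the documented sk-or- prefix:

```python
import os

def looks_like_openrouter_key(key: str) -> bool:
    """OpenRouter API keys start with 'sk-or-' (e.g. sk-or-v1-...)."""
    return key.startswith("sk-or-") and len(key) > len("sk-or-")

key = os.environ.get("OPENROUTER_API_KEY", "")
if not looks_like_openrouter_key(key):
    print("OPENROUTER_API_KEY is missing or does not look like an OpenRouter key")
```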

Configuration Notes

Why not use OpenClaw's built-in OpenRouter driver directly? Testing showed that the built-in openrouter/ prefix driver works well for Claude-series models (e.g., openrouter/anthropic/claude-sonnet-4-5) with no extra configuration needed. However, when switching to Kimi, GLM, or MiniMax, compatibility issues arise with tool-calling parameter formats, reasoning token handling, and certain API response field parsing — often resulting in no response at all. By using a custom openroutercustom Provider with "api": "openai-completions", OpenClaw communicates via the standard OpenAI-compatible protocol, and these issues are resolved.

Model IDs must use the full path from OpenRouter. For example, Kimi K2.5's path on OpenRouter is moonshotai/kimi-k2.5, GLM-5 Turbo is z-ai/glm-5-turbo, and MiniMax M2.5 is minimax/minimax-m2.5. This ID is passed directly as the model parameter to the OpenRouter API — a typo means the model won't be found.
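Since a typo in an ID silently breaks routing, it can help to check that every configured ID at least has the vendor/model shape OpenRouter uses. A minimal check over the four IDs from this guide:

```python
MODEL_IDS = [
    "moonshotai/kimi-k2.5",
    "minimax/minimax-m2.5",
    "minimax/minimax-m2.7",
    "z-ai/glm-5-turbo",
]

def is_valid_model_path(model_id: str) -> bool:
    """An OpenRouter model ID is 'vendor/model', both parts non-empty."""
    vendor, sep, model = model_id.partition("/")
    return bool(vendor) and sep == "/" and bool(model)

bad = [m for m in MODEL_IDS if not is_valid_model_path(m)]
print("all IDs look valid" if not bad else f"check these IDs: {bad}")
```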

What reasoning: true means. All of the models configured above support thinking/reasoning mode. When set to true, OpenClaw enables the model's internal chain-of-thought on tasks requiring complex reasoning, improving task decomposition and tool-calling accuracy. You can set it to false for simple everyday tasks to save on token consumption.

The cost field. OpenClaw uses this to estimate the cost of each conversation, helping you monitor your budget. Values are in USD per million tokens — just copy them from OpenRouter's pricing page.
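As a concrete example of how those numbers translate into a per-conversation estimate (the prices are the per-million-token USD figures from the config above):

```python
def conversation_cost(input_tokens: int, output_tokens: int,
                      input_price: float, output_price: float) -> float:
    """Estimated USD cost; prices are USD per million tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# A typical agent turn on MiniMax M2.5 ($0.20/M in, $1.17/M out):
cost = conversation_cost(20_000, 2_000, 0.20, 1.17)
print(f"${cost:.4f}")  # roughly $0.0063 per turn
```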

maxTokens in OpenClaw's configuration refers to the maximum number of tokens the model can output in a single response, not the context window size. The official default of 8192 makes sense — in agent scenarios, each conversation turn doesn't need extremely long outputs, and 8192 tokens is more than enough for most tool calls and replies.


Daily Usage Recommendations

Once configured, you can switch between models at any time in OpenClaw's chat interface. Here are some practical pairing suggestions:

MiniMax M2.5 for daily conversations and lightweight tasks — at $0.20/M input and $1.17/M output, it's incredibly affordable, and its SWE-Bench score of 80.2% shows solid coding capability.

Kimi K2.5 for tasks requiring image understanding and multimodal analysis — it's the only model among those configured here that supports image input, and its 262K ultra-long context window is ideal for processing large codebases and lengthy documents.

GLM-5 Turbo for stable long-chain tool calling, scheduled tasks, and continuous agent workflows — it was specifically optimized for OpenClaw-style scenarios during training, with zero-error tool calling as its core selling point.

MiniMax M2.7 for complex workflows requiring multi-agent collaboration and self-correction — its multi-agent system capabilities were specifically trained for this purpose.

Of course, Claude Sonnet remains the strongest overall choice — it's just expensive. Using Chinese models as your daily workhorse while reserving Claude for critical tasks is currently the best cost-performance strategy.