Connect to OpenClaw with OpenAI Compatible API
Learn how to integrate OpenClaw into your existing OpenAI-compatible workflows using its Chat Completions endpoint.
Posted on: 2026-03-03 by AI Assistant

OpenClaw’s Gateway provides a powerful feature: a small, OpenAI-compatible Chat Completions endpoint. This allows you to leverage OpenClaw agents within tools and applications that already support the OpenAI API standard.
Enabling the Endpoint
By default, the OpenAI-compatible endpoint is disabled for security reasons. To use it, you must explicitly enable it in your OpenClaw configuration.
To enable it, set gateway.http.endpoints.chatCompletions.enabled to true:
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
Conversely, you can disable it by setting the value to false.
Authentication
The endpoint uses the standard Gateway authentication configuration. You must provide a bearer token in the Authorization header:
Authorization: Bearer <token>
- Token Mode: If gateway.auth.mode="token", use the configured gateway.auth.token (or the OPENCLAW_GATEWAY_TOKEN environment variable).
- Password Mode: If gateway.auth.mode="password", use gateway.auth.password (or OPENCLAW_GATEWAY_PASSWORD).
The endpoint also respects rate limiting if configured.
Security Boundary
It is crucial to treat this endpoint as a full operator-access surface. A valid token or password for this endpoint grants owner/operator credentials. Requests run through the same control-plane agent path as trusted operator actions.
Best Practices:
- Keep this endpoint on loopback, tailnet, or private ingress only.
- Do not expose it directly to the public internet.
Choosing an Agent
You can specify which OpenClaw agent should handle the request using the model field in the OpenAI request body:
- model: "openclaw:<agentId>" (e.g., openclaw:main)
- model: "agent:<agentId>"
Alternatively, you can use a custom header:
x-openclaw-agent-id: <agentId> (defaults to main if not specified).
For advanced routing, you can also use x-openclaw-session-key to control session routing directly.
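The routing rules above can be sketched as a small request builder. The function name is hypothetical; the model syntax and message shape follow this post.

```python
def chat_request(agent_id: str, text: str, stream: bool = False) -> dict:
    """Build a Chat Completions body routed to a specific OpenClaw agent
    via the "openclaw:<agentId>" model syntax."""
    return {
        "model": f"openclaw:{agent_id}",
        "stream": stream,
        "messages": [{"role": "user", "content": text}],
    }
```

Alternatively, keep model as plain "openclaw" and pass the agent in the x-openclaw-agent-id header, as the curl examples below do.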
Usage Examples
Non-Streaming Request
curl -sS http://127.0.0.1:18789/v1/chat/completions \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-agent-id: main' \
  -d '{
    "model": "openclaw",
    "messages": [{"role":"user","content":"hi"}]
  }'
Streaming Request (SSE)
OpenClaw supports Server-Sent Events (SSE). Set "stream": true in your request body to receive real-time updates.
curl -N http://127.0.0.1:18789/v1/chat/completions \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-agent-id: main' \
  -d '{
    "model": "openclaw",
    "stream": true,
    "messages": [{"role":"user","content":"hi"}]
  }'
The response stream will look like this:
data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hello!"}}]}
data: [DONE]
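Consuming that stream amounts to reading "data:" lines, decoding each JSON chunk, and stopping at the [DONE] sentinel. A minimal sketch for the chunk shape shown above (not a full SSE client):

```python
import json

def collect_deltas(lines) -> str:
    """Concatenate streamed content deltas from chat.completion.chunk
    SSE lines, stopping at the [DONE] sentinel."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # ignore comments, blank keep-alive lines, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {}).get("content")
            if delta:
                parts.append(delta)
    return "".join(parts)
```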
Session Behavior
By default, the endpoint is stateless, generating a new session key for each request. However, if you include an OpenAI user string in the request, the Gateway will derive a stable session key from it, allowing repeated calls to share an agent session.
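So, to keep repeated calls in one agent session, reuse the same user value across requests. A sketch (the function name is illustrative; the user field is standard OpenAI request shape):

```python
def session_request(user_id: str, text: str) -> dict:
    """Build a request pinned to a stable session: per the rules above,
    the Gateway derives the session key from the "user" string, so
    requests sharing a user value share an agent session."""
    return {
        "model": "openclaw",
        "user": user_id,
        "messages": [{"role": "user", "content": text}],
    }
```

Omit the user field when you want each request handled in a fresh, stateless session.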
This compatibility makes OpenClaw a flexible building block for agentic workflows while remaining interoperable with the most widely used AI API standard.