API Documentation

Everything you need to integrate with Opulon API

Overview

Opulon API is a Claude-compatible AI gateway. Point your existing Claude integration at our endpoint and swap in your Opulon API key, and everything works immediately: no code rewrites, no new SDKs.

We support the Messages API, including streaming, all current Claude models, and the same request/response format you already use.

Quickstart

1. Get your API key

Purchase a plan and get your API key from the Opulon Status Page. Your key will look like op-ul-e4st85dasf.

2. Set the base URL

```bash
export ANTHROPIC_API_KEY="op-ul-e4st85dasf"
export ANTHROPIC_BASE_URL="https://api.opulonapi.com"
```

3. Make your first request

```bash
curl https://api.opulonapi.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, Claude"}
    ]
  }'
```

Response

```json
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I assist you today?"
    }
  ],
  "model": "claude-sonnet-4-20250514",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 12,
    "output_tokens": 8
  }
}
```

Authentication

All requests must include these headers:

| Header | Value | Description |
| --- | --- | --- |
| x-api-key | op-ul-e4st85dasf | Your Opulon API key |
| anthropic-version | 2023-06-01 | API version string |
| content-type | application/json | Request body format |

If you're using the official Anthropic SDK, just set the API key and base URL; the SDK handles the headers automatically.

Base URL

```text
https://api.opulonapi.com
```

Replace Anthropic's default base URL (https://api.anthropic.com) with the Opulon endpoint. All paths remain identical.

Python (Anthropic SDK)

```python
import anthropic

client = anthropic.Anthropic(
    api_key="op-ul-e4st85dasf",
    base_url="https://api.opulonapi.com"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)
print(message.content[0].text)
```

TypeScript (Anthropic SDK)

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: "op-ul-e4st85dasf",
  baseURL: "https://api.opulonapi.com"
});

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Hello, Claude" }
  ]
});
console.log(message.content[0].text);
```

OpenAI SDK (Compatible)

```python
from openai import OpenAI

client = OpenAI(
    api_key="op-ul-e4st85dasf",
    base_url="https://api.opulonapi.com/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)
print(response.choices[0].message.content)
```

Messages API

POST /v1/messages

Send a structured list of messages to Claude and receive a response. Supports multi-turn conversations, system prompts, and tool use.

Request Body

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Model ID to use (e.g. claude-sonnet-4-20250514) |
| messages | array | Yes | List of message objects with role and content |
| max_tokens | integer | Yes | Maximum number of tokens to generate |
| system | string | No | System prompt to set context |
| temperature | float | No | Randomness (0.0–1.0, default 1.0) |
| top_p | float | No | Nucleus sampling threshold |
| stream | boolean | No | Enable server-sent events streaming |
| stop_sequences | array | No | Custom stop sequences |
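As a sketch of how these parameters fit together, the helper below assembles a request body with the required fields plus whichever optional ones are set. The function name is illustrative, not part of any SDK:

```python
import json

def build_message_request(model, messages, max_tokens,
                          system=None, temperature=None, stream=False):
    """Assemble a /v1/messages request body; optional fields from the
    parameter table above are included only when provided."""
    body = {"model": model, "messages": messages, "max_tokens": max_tokens}
    if system is not None:
        body["system"] = system
    if temperature is not None:
        body["temperature"] = temperature
    if stream:
        body["stream"] = True
    return json.dumps(body)

payload = build_message_request(
    "claude-sonnet-4-20250514",
    [{"role": "user", "content": "Hello, Claude"}],
    1024,
    system="You are a concise assistant.",
    temperature=0.7,
)
```

The resulting JSON string can be sent as the body of a POST to /v1/messages with the headers from the Authentication section.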

Streaming

Set stream: true to receive responses as server-sent events (SSE). Tokens are delivered as they are generated.

```bash
curl https://api.opulonapi.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a haiku about APIs"}
    ]
  }'
```
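For environments without an SDK, the generator below sketches the same streaming request using only the Python standard library. It yields raw SSE lines for the caller to parse; this is an illustrative sketch, not an official client:

```python
import json
import urllib.request

def stream_message(api_key: str, prompt: str):
    """Issue a streaming /v1/messages request and yield raw SSE lines."""
    body = json.dumps({
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 1024,
        "stream": True,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        "https://api.opulonapi.com/v1/messages",
        data=body,
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # HTTPResponse is line-iterable, which suits SSE's line framing.
        for raw in resp:
            line = raw.decode().rstrip("\n")
            if line:
                yield line
```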

Event Types

| Event | Description |
| --- | --- |
| message_start | Contains the message object with metadata |
| content_block_start | Start of a content block |
| content_block_delta | Incremental text content |
| content_block_stop | End of a content block |
| message_delta | Stop reason and final usage |
| message_stop | End of the message |
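A minimal sketch of splitting those events out of a raw SSE body, assuming the standard `event:`/`data:` line format with blank-line delimiters (a production client should parse incrementally as chunks arrive):

```python
import json

def parse_sse(raw: str):
    """Split an SSE body into (event_name, decoded_data) pairs."""
    events = []
    event_name, data_lines = None, []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and event_name is not None:
            # Blank line terminates one event; data lines hold JSON.
            events.append((event_name, json.loads("".join(data_lines))))
            event_name, data_lines = None, []
    return events

sample = (
    'event: content_block_delta\n'
    'data: {"type": "content_block_delta", "delta": {"text": "Hi"}}\n'
    '\n'
    'event: message_stop\n'
    'data: {"type": "message_stop"}\n'
    '\n'
)
events = parse_sse(sample)
```

Concatenating the `text` fields from successive content_block_delta events reconstructs the full response.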

Models

The following models are available through Opulon API. Use the model ID in your requests.

| Model | Model ID | Max Output | Best For |
| --- | --- | --- | --- |
| Claude Opus 4 | claude-opus-4-20250514 | 32,000 | Complex reasoning, research |
| Claude Sonnet 4 | claude-sonnet-4-20250514 | 16,000 | Balanced speed & quality |
| Claude Haiku 3.5 | claude-3-5-haiku-20241022 | 8,192 | Fast, cost-efficient tasks |

All models support 200K token context windows. Model availability may vary by plan.

Rate Limits

Rate limits are configured per API key based on your plan. When limits are reached, the API returns a 429 status code.

| Limit Type | Description |
| --- | --- |
| Requests per minute | Maximum number of API calls per minute (varies by plan) |
| Token budget | Rolling 5-hour window token limit (input + output combined) |

View your current usage and remaining limits in your status page.
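A common way to handle 429 responses is exponential backoff. The sketch below is a generic pattern rather than an Opulon-provided helper, and the delays and the stand-in exception are illustrative:

```python
import time

def retry_with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` when it raises an exception carrying status 429,
    sleeping base_delay * 2**attempt between tries."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            if getattr(exc, "status_code", None) != 429 \
                    or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Illustration with a stand-in exception; in real use the HTTP client
# or SDK would raise on a 429 response.
class RateLimited(Exception):
    status_code = 429

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimited("rate limit exceeded")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0)
```

Honoring a Retry-After header, when the server sends one, is generally preferable to a fixed schedule.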

Error Handling

Errors are returned as JSON with an error object:

```json
{
  "type": "error",
  "error": {
    "type": "invalid_request_error",
    "message": "model: field required"
  }
}
```

HTTP Status Codes

| Code | Type | Description |
| --- | --- | --- |
| 400 | invalid_request_error | Invalid or missing parameters |
| 401 | authentication_error | Invalid API key |
| 403 | permission_error | Key does not have access to the requested resource |
| 404 | not_found_error | Requested resource not found |
| 429 | rate_limit_error | Rate limit or token budget exceeded |
| 500 | api_error | Internal server error |
| 529 | overloaded_error | Upstream provider is overloaded |
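The error body shown above can be unpacked with a small helper like the hand-rolled sketch below; SDK users would instead catch the SDK's own exception types:

```python
import json

def parse_error(body: str):
    """Extract (error_type, message) from an error response body."""
    payload = json.loads(body)
    err = payload.get("error", {})
    return err.get("type"), err.get("message")

etype, msg = parse_error(
    '{"type": "error", "error": {"type": "invalid_request_error", '
    '"message": "model: field required"}}'
)
```

Branching on the returned type (e.g. retrying only rate_limit_error and overloaded_error) keeps client-side handling aligned with the table above.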

SDKs & Libraries

Since Opulon is fully Claude-compatible, you can use the official Anthropic SDKs. Just set the base URL to https://api.opulonapi.com.

You can also use the OpenAI SDK with Opulon: set the base URL to https://api.opulonapi.com/v1 and use your Opulon key as the API key.