Chat Completions
POST /chat/completions (1–5 credits)
OpenAI-compatible chat completions endpoint. Supports 24 LLMs across GPT, Claude, Gemini, Grok, DeepSeek, Qwen, Llama, Mistral, and more — all routed through a single endpoint using the standard OpenAI messages format. Synchronous: returns the full response immediately (no task_id, no polling).
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | The LLM to use. One of: `deepseek-chat`, `deepseek-reasoner`, `qwen-turbo`, `qwen-plus`, `llama-3.3-70b`, `llama-4-scout`, `llama-4-maverick`, `gemma-3-27b`, `gpt-4o-mini`, `claude-haiku-4-5`, `gemini-2.0-flash`, `grok-3-mini`, `qwen-max`, `mistral-large`, `gpt-4o`, `claude-sonnet-4-5`, `gemini-2.5-pro`, `grok-3`, `o3`, `o4-mini`, `claude-opus-4`, `gemini-2.5-ultra`. |
| messages | array | Required | Array of message objects in OpenAI format: `[{ "role": "user", "content": "..." }]`. Supports system, user, and assistant roles. |
| temperature | number | Optional | Sampling temperature between 0 and 2; lower values are more deterministic. Not supported by the o3/o4-mini reasoning models. Default: `1`. |
| max_tokens | number | Optional | Maximum tokens to generate in the response. |
| stream | boolean | Optional | Stream the response as Server-Sent Events (SSE) in OpenAI chunk format. Set to true for real-time output. Default: `false`. |
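As a sketch of how the parameters above combine into a request body, here is a minimal stdlib-only Python helper. The `build_payload` function mirrors the table (optional fields are omitted when unset); `chat` is a hypothetical convenience wrapper that assumes network access and a valid key.

```python
import json
import urllib.request

API_URL = "https://journeyapi.co/api/v1/chat/completions"

def build_payload(model, messages, temperature=None, max_tokens=None, stream=False):
    """Assemble the request body, leaving out optional parameters that are unset."""
    payload = {"model": model, "messages": messages}
    if temperature is not None:
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if stream:
        payload["stream"] = True
    return payload

def chat(api_key, payload):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Omitting unset optionals matters for the reasoning models: passing `temperature` to `o3`/`o4-mini` causes an error (see Tips below).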
Example Request
```bash
# Basic chat
curl -X POST https://journeyapi.co/api/v1/chat/completions \
  -H "Authorization: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Explain how the Midjourney --sref parameter works." }
    ]
  }'

# Streaming response
curl -X POST https://journeyapi.co/api/v1/chat/completions \
  -H "Authorization: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5",
    "messages": [{ "role": "user", "content": "Write a short poem about APIs." }],
    "stream": true
  }'

# Budget model (1 credit)
curl -X POST https://journeyapi.co/api/v1/chat/completions \
  -H "Authorization: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{ "role": "user", "content": "What is the capital of France?" }],
    "temperature": 0.2
  }'
```

Response
```json
{
  "id": "chatcmpl-DMmQYTAuYJ82MDwKr1LvCXH2APXaG",
  "object": "chat.completion",
  "created": 1774321742,
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 9,
    "total_tokens": 22
  }
}
```

Response Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique completion ID (e.g. chatcmpl-...). |
| object | string | Always "chat.completion". |
| created | number | Unix timestamp (seconds) of when the completion was created. |
| model | string | The upstream model ID used (may differ from the requested model ID; see Quirks & Gotchas). |
| choices | array | Array of completion choices, each with a message object (role, content) and a finish_reason. |
| usage | object | Token usage stats: prompt_tokens, completion_tokens, total_tokens. |
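In most integrations you only need the assistant text and the token count. A small sketch of pulling both out of a response shaped like the example above (the `extract_reply` helper is illustrative, not part of the API):

```python
def extract_reply(completion: dict) -> tuple[str, int]:
    """Return the assistant text and total token count from a
    chat.completion response shaped like the example above."""
    content = completion["choices"][0]["message"]["content"]
    total = completion["usage"]["total_tokens"]
    return content, total

# Using the sample response from this page:
sample = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 13, "completion_tokens": 9, "total_tokens": 22},
}
text, tokens = extract_reply(sample)
```

Remember that `total_tokens` is informational only: per the Quirks section, credits are charged per request, not per token.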
Tips
- The endpoint is fully OpenAI SDK-compatible: just change the base URL and API key. No other code changes are needed.
- Use budget models (`deepseek-chat`, `llama-4-scout`, `qwen-turbo`) for high-volume or non-critical tasks at 1 credit each.
- Reasoning models (`o3`, `o4-mini`) do not support `temperature`. Omit it or the request will error.
- For streaming, set `stream: true` and handle the SSE response as `data: {...}` lines, the same as the OpenAI API.
- `claude-sonnet-4-5` and `claude-opus-4` follow Anthropic's content policy and may refuse certain requests that GPT models allow.
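The streaming tip above can be sketched as a small parser, assuming the chunks follow OpenAI's standard `chat.completion.chunk` shape (each `data:` line carries a `choices[0].delta`, and the stream ends with `data: [DONE]`):

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from OpenAI-style SSE lines.

    Each event arrives as 'data: {json}'; the stream terminates
    with 'data: [DONE]'. Blank lines and non-data lines are skipped,
    as are deltas with no content (e.g. the initial role-only delta).
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```

Joining the yielded deltas reconstructs the full assistant message; in a real client you would feed this generator from the HTTP response body line by line.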
Quirks & Gotchas
Unlike image endpoints, chat returns the result directly — there is no task_id and no polling step.
Credits are charged per request regardless of response length. Token counts do not affect credit cost.
The upstream model ID may differ from the JourneyAPI model ID (e.g. gpt-4o maps to gpt-4o-2024-08-06 upstream).
Expert Tips & Best Practices
OpenAI SDK drop-in replacement
You can use the official OpenAI Python or Node.js SDK by pointing it at JourneyAPI. Set `base_url="https://journeyapi.co/api/v1"` and `api_key="YOUR_JAPI_KEY"`. Every model string in the SUPPORTED_CHAT_MODELS list works as-is.