OpenAI Responses API
OpenAI-compatible Responses API
OneRouter’s Responses API provides OpenAI-compatible access to multiple AI models through a unified interface, designed as a drop-in replacement for OpenAI’s Responses API.
The API is stateless: each request is independent and no server-side state is persisted. It offers enhanced capabilities including reasoning, tool calling, and web search integration.
Base URL
https://llm.onerouter.pro/v1/responses

Authentication
All requests require authentication using your OneRouter API key:
const response = await fetch('https://llm.onerouter.pro/v1/responses', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <<API_KEY>>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'o4-mini',
    input: 'Hello, world!',
  }),
});
import requests

response = requests.post(
    'https://llm.onerouter.pro/v1/responses',
    headers={
        'Authorization': 'Bearer <<API_KEY>>',
        'Content-Type': 'application/json',
    },
    json={
        'model': 'o4-mini',
        'input': 'Hello, world!',
    },
)
curl -X POST https://llm.onerouter.pro/v1/responses \
  -H "Authorization: Bearer <<API_KEY>>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o4-mini",
    "input": "Hello, world!"
  }'
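A successful request returns a parsed JSON body. As a minimal sketch of reading the generated text out of it, assuming the response mirrors OpenAI's Responses schema (an `output` array of items, where `message` items carry `content` parts with `output_text` text) — the sample payload below is illustrative, not captured API output:

```python
# Sketch: pull the assistant's text out of a parsed Responses payload.
# Assumes the OpenAI Responses shape: payload["output"] is a list of
# items; "message" items hold "content" parts of type "output_text".

def extract_output_text(payload: dict) -> str:
    """Concatenate all output_text parts from a Responses payload."""
    parts = []
    for item in payload.get("output", []):
        if item.get("type") != "message":
            continue  # skip reasoning / tool-call items
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                parts.append(part.get("text", ""))
    return "".join(parts)

# Illustrative payload (not real API output):
sample = {
    "output": [
        {
            "type": "message",
            "content": [{"type": "output_text", "text": "Hello there!"}],
        }
    ]
}
print(extract_output_text(sample))  # Hello there!
```

Iterating over every item rather than indexing `output[0]` keeps the helper robust when reasoning or tool-call items precede the message.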
Core Features
- Learn the fundamentals of making requests with simple text input and handling responses.
- Access advanced reasoning capabilities with configurable effort levels and encrypted reasoning chains.
- Integrate function calling with support for parallel execution and complex tool interactions.
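As a sketch of what a function-calling request body might look like, assuming OneRouter accepts the same `tools` format as OpenAI's Responses API — the `get_weather` tool is a made-up example, not a real OneRouter function:

```python
# Sketch of a tool-calling request body. The flat "tools" schema and the
# parallel_tool_calls flag are assumptions based on OpenAI's Responses
# API; this page does not document the exact format.

def build_tool_request(model: str, user_input: str) -> dict:
    return {
        "model": model,
        "input": user_input,
        "tools": [
            {
                "type": "function",
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
        # Parallel execution is listed as supported above; flag name assumed.
        "parallel_tool_calls": True,
    }

body = build_tool_request("o4-mini", "What's the weather in Paris?")
```

The dict can be passed directly as the `json=` argument to `requests.post` in the authentication example above.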
Error Handling
The API returns structured error responses:
{
  "error": {
    "code": "invalid_prompt",
    "message": "Missing required parameter: 'model'."
  },
  "metadata": null
}

For comprehensive error handling guidance, see Error Handling.
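A minimal sketch of surfacing these structured errors to callers, assuming only the error envelope shown above (`error.code`, `error.message`, `metadata`) — the exception class is an illustration, not part of any OneRouter SDK:

```python
# Sketch: turn the structured error envelope into a Python exception.
# Assumes the shape shown above: {"error": {"code", "message"}, "metadata"}.

class OneRouterAPIError(Exception):
    """Hypothetical exception wrapping a structured API error."""

    def __init__(self, code: str, message: str):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.message = message

def raise_for_api_error(payload: dict) -> None:
    """Raise OneRouterAPIError if the payload carries an error envelope."""
    err = payload.get("error")
    if err:
        raise OneRouterAPIError(err.get("code", "unknown"), err.get("message", ""))

# Example with the error body shown above:
try:
    raise_for_api_error({
        "error": {
            "code": "invalid_prompt",
            "message": "Missing required parameter: 'model'.",
        },
        "metadata": None,
    })
except OneRouterAPIError as exc:
    print(exc.code)  # invalid_prompt
```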