Reasoning
Advanced reasoning capabilities with the Responses API
The Responses API supports advanced reasoning capabilities, allowing models to show their internal reasoning process with configurable effort levels.
Reasoning Configuration
Configure reasoning behavior using the reasoning parameter:
```javascript
const response = await fetch('https://llm.onerouter.pro/v1/responses', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <<API_KEY>>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'o4-mini',
    input: 'What is the meaning of life?',
    reasoning: {
      effort: 'high',
    },
    max_output_tokens: 9000,
  }),
});

const result = await response.json();
console.log(result);
```

```python
import requests

response = requests.post(
    'https://llm.onerouter.pro/v1/responses',
    headers={
        'Authorization': 'Bearer <<API_KEY>>',
        'Content-Type': 'application/json',
    },
    json={
        'model': 'o4-mini',
        'input': 'What is the meaning of life?',
        'reasoning': {
            'effort': 'high',
        },
        'max_output_tokens': 9000,
    },
)
result = response.json()
print(result)
```

Reasoning Effort Levels
The effort parameter controls how much computational effort the model puts into reasoning:
- minimal: Basic reasoning with minimal computational effort
- low: Light reasoning for simple problems
- medium: Balanced reasoning for moderate complexity
- high: Deep reasoning for complex problems
Complex Reasoning Example
For complex mathematical or logical problems:
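A minimal sketch in Python, reusing the endpoint and parameters shown above. A multi-step logic puzzle is a good fit for high effort; the prompt and the `ONEROUTER_API_KEY` environment variable name here are illustrative, not part of the API.

```python
import os

import requests

# Build the request body: a multi-step constraint puzzle benefits
# from 'high' reasoning effort.
payload = {
    'model': 'o4-mini',
    'input': (
        'A farmer must ferry a wolf, a goat, and a cabbage across a river, '
        'one at a time, never leaving the wolf alone with the goat or the '
        'goat alone with the cabbage. List a valid sequence of crossings.'
    ),
    'reasoning': {'effort': 'high'},
    'max_output_tokens': 9000,
}

# Only send the request when an API key is configured
# (ONEROUTER_API_KEY is an illustrative variable name).
api_key = os.environ.get('ONEROUTER_API_KEY')
if api_key:
    response = requests.post(
        'https://llm.onerouter.pro/v1/responses',
        headers={
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json',
        },
        json=payload,
    )
    print(response.json())
```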
Reasoning in Conversation Context
Include reasoning in multi-turn conversations:
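A sketch of a multi-turn request, assuming `input` also accepts a list of role/content message objects (as in OpenAI-style Responses APIs); the conversation content and the `ONEROUTER_API_KEY` variable name are illustrative.

```python
import os

import requests

# Multi-turn conversation: prior turns are passed as a list of messages.
# (Assumes `input` accepts role/content message objects, as in
# OpenAI-style Responses APIs.)
payload = {
    'model': 'o4-mini',
    'input': [
        {'role': 'user', 'content': 'Is 1001 a prime number?'},
        {'role': 'assistant', 'content': 'No, 1001 = 7 * 11 * 13.'},
        {'role': 'user', 'content': 'Factor 1003 the same way.'},
    ],
    'reasoning': {'effort': 'medium'},
}

api_key = os.environ.get('ONEROUTER_API_KEY')
if api_key:
    response = requests.post(
        'https://llm.onerouter.pro/v1/responses',
        headers={
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json',
        },
        json=payload,
    )
    print(response.json())
```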
Streaming Reasoning
Enable streaming to see reasoning develop in real-time:
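A sketch of a streaming request in Python, assuming the endpoint emits server-sent events line by line when `stream` is set, as OpenAI-compatible APIs commonly do; the exact chunk format may differ, and `ONEROUTER_API_KEY` is an illustrative variable name.

```python
import os

import requests

# Streaming request: set 'stream' in the body and read the response
# incrementally. (Assumes the endpoint emits server-sent events when
# streaming is enabled, as OpenAI-compatible APIs commonly do.)
payload = {
    'model': 'o4-mini',
    'input': 'Walk through a proof that the square root of 2 is irrational.',
    'reasoning': {'effort': 'high'},
    'stream': True,
}

api_key = os.environ.get('ONEROUTER_API_KEY')
if api_key:
    with requests.post(
        'https://llm.onerouter.pro/v1/responses',
        headers={
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json',
        },
        json=payload,
        stream=True,  # keep the HTTP connection open and iterate the body
    ) as response:
        for line in response.iter_lines():
            if line:
                print(line.decode('utf-8'))
```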
Best Practices
- Choose appropriate effort levels: Use high for complex problems, low for simple tasks
- Consider token usage: Reasoning increases token consumption
- Use streaming: For long reasoning chains, streaming provides a better user experience
- Include context: Provide sufficient context for the model to reason effectively
Next Steps
- Explore Tool Calling with reasoning
- Review Basic Usage fundamentals