Messages
Send a structured list of input messages with text and/or image content, and the model will generate the next message in the conversation.
The Messages API can be used for either single queries or stateless multi-turn conversations.
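For illustration, here is a minimal Python sketch of a single query. The host, path, x-api-key header, model name, and response fields are taken from the example request and response further down this page; the https scheme, the requests library, and the YOUR_API_KEY placeholder are assumptions.

import requests

# Endpoint and authentication header, as shown in the example request below.
url = "https://llm.onerouter.pro/v1/messages"  # scheme assumed
headers = {
    "x-api-key": "YOUR_API_KEY",  # placeholder; substitute your own key
    "Content-Type": "application/json",
}

payload = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Can you explain LLMs in plain English?"}
    ],
}

resp = requests.post(url, headers=headers, json=payload)
data = resp.json()

# The generated turn is returned as a list of content blocks; the first block's
# text, the stop reason, and token usage mirror the example response below.
print(data["content"][0]["text"])
print(data["stop_reason"], data["usage"])

Because the API is stateless, a multi-turn conversation is expressed by resending the prior user and assistant turns in the messages array on every call, as in the example request below.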
x-api-key
Your unique API key for authentication.
Example: sk-lh7cVWdVZx7lFkVmf92yFaz3uwWwiwMXfOESPnbpLtPKB3Oi

model
The model that will complete your prompt.
Example: claude-3-7-sonnet-20250219

max_tokens
The maximum number of tokens to generate before stopping.
Example: 1024

stop_sequences
Custom text sequences that will cause the model to stop generating.
Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".
If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
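Continuing the sketch above (the requests import, url, and headers are reused from it), here is a hedged example of the stop_sequences behavior just described; the prompt and the chosen stop string are illustrative assumptions.

# Ask the model to count, but stop as soon as the custom sequence "7" appears.
payload = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 1024,
    "stop_sequences": ["7"],
    "messages": [
        {"role": "user", "content": "Count from 1 to 10, one number per line."}
    ],
}

data = requests.post(url, headers=headers, json=payload).json()

# On a natural finish stop_reason is "end_turn" and stop_sequence is null;
# when a custom sequence is hit, stop_reason is "stop_sequence" and
# stop_sequence contains the matched string.
if data["stop_reason"] == "stop_sequence":
    print("Stopped on:", data["stop_sequence"])
else:
    print(data["content"][0]["text"])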
stream
Whether to incrementally stream the response using server-sent events (a streaming sketch follows the example response below).

temperature
Amount of randomness injected into the response.
Example: 1

top_k
Only sample from the top K options for each subsequent token.
Example: 5

top_p
Use nucleus sampling.
Example: 0.7

Example request

POST /v1/messages HTTP/1.1
Host: llm.onerouter.pro
x-api-key: text
Content-Type: application/json
Accept: */*
Content-Length: 715
{
  "model": "claude-3-7-sonnet@20250219",
  "messages": [
    {
      "role": "user",
      "content": "Hello there."
    },
    {
      "role": "assistant",
      "content": "Hi, I'm Claude. How can I help you?"
    },
    {
      "role": "user",
      "content": "Can you explain LLMs in plain English?"
    }
  ],
  "max_tokens": 1024,
  "stop_sequences": [],
  "stream": false,
  "system": [
    {
      "text": "Today's date is 2024-06-01.",
      "type": "text"
    }
  ],
  "temperature": 1,
  "thinking": {
    "type": "disabled"
  },
  "tool_choice": {
    "type": "auto"
  },
  "tools": [
    {
      "name": "get_stock_price",
      "description": "Get the current stock price for a given ticker symbol.",
      "input_schema": {
        "type": "object",
        "properties": {
          "ticker": {
            "type": "string",
            "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
          }
        },
        "required": [
          "ticker"
        ]
      }
    }
  ],
  "top_k": 5,
  "top_p": 0.7
}

Example response

{
  "model": "claude-3-7-sonnet-20250219",
  "id": "msg_bdrk_01HMw9DPvsUxr3nFe4XEBoPg",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
"text": "# Large Language Models (LLMs) in Plain English\n\nLLMs, or Large Language Models, are advanced AI systems that understand and generate human language. Here's how they work in simple terms:\n\n## What are LLMs?\n- They're massive computer programs trained on enormous amounts of text from books, articles, websites, and other sources\n- Think of them as incredibly sophisticated pattern-recognition systems for language\n\n## How do they work?\n- LLMs learn by analyzing billions or even trillions of examples of human writing\n- They identify patterns in how words appear together and in what contexts\n- When you ask a question, they predict what text should come next based on all the patterns they've learned\n\n## What makes them \"large\"?\n- The \"large\" refers to their size - they contain billions or trillions of adjustable parameters\n- These parameters are like knobs that get fine-tuned during training\n- More parameters generally allow for more nuanced understanding of language\n\n## What can LLMs do?\n- Generate human-like text responses to questions\n- Summarize information\n- Translate between languages\n- Write different types of content (essays, stories, code)\n- Have conversations that seem natural\n\n## Limitations\n- They don't truly \"understand\" meaning the way humans do\n- They can make up information (hallucinate)\n- They don't have real-world experience or consciousness\n- They reflect biases present in their training data\n\nI'm an example of an LLM - I was created by Anthropic and trained on a wide variety of texts to be helpful, harmless, and honest in my interactions."
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 445,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "output_tokens": 357
  }
}
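When stream is true, the reply arrives as server-sent events instead of a single JSON body. Below is a hedged streaming sketch that reuses the requests import, url, and headers from the first example; the "data: " line format and the content_block_delta event name follow the standard Messages API streaming convention and are assumptions not confirmed on this page.

import json

payload = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 1024,
    "stream": True,
    "messages": [
        {"role": "user", "content": "Can you explain LLMs in plain English?"}
    ],
}

# Keep the connection open and read the server-sent event stream line by line.
with requests.post(url, headers=headers, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        event = json.loads(line[len(b"data: "):])
        # Assumption: incremental text arrives in content_block_delta events.
        if event.get("type") == "content_block_delta":
            print(event["delta"].get("text", ""), end="", flush=True)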