OneRouter
  • Overview
    • 🚂 Welcome to OneRouter
    • Introduction
    • Models
    • FAQ
    • Pricing & Fees
      • Pricing and Fee Structure
      • Costs breakdown logs
  • API Reference
    • LLM Model API
    • Generative Model API
    • Universal API
    • Billing API
    • Authentication
    • Errors
    • Search Engine API
      • Tavily
  • Features
    • Privacy and Logging
    • Provider Routing
    • Model Routing
    • Latency and Performance
    • Performance Monitoring & Analysis
    • Rate Limits
    • LLM Model
      • Structured Outputs
      • Tool Calling
      • Prompt Caching
      • Multimodal LLM Model
    • Generative Model
  • Models Endpoints
    • Synchronous Requests
    • Queue API
  • Frameworks and Integrations
    • Overview
    • OpenAI SDK
    • LangChain
    • PydanticAI
    • Langfuse
    • n8n
  • Community
    • Discord
    • LinkedIn
    • Twitter
    • Reddit
Features

LLM Model

LLM Model features

  • Structured Outputs
  • Tool Calling
  • Prompt Caching
  • Multimodal LLM Model
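As a rough sketch of how two of the features above, Structured Outputs and Tool Calling, typically appear together in an OpenAI-compatible chat completion request, the snippet below builds a hypothetical request payload. The model slug, schema, and tool definition are illustrative placeholders, not OneRouter-specific values; consult the corresponding feature pages for the exact fields OneRouter supports.

```python
import json

def build_request(user_prompt: str) -> dict:
    """Assemble a hypothetical OpenAI-compatible chat completion payload
    that uses both a Structured Outputs schema and a Tool Calling entry."""
    return {
        "model": "example/llm-model",  # placeholder model slug
        "messages": [{"role": "user", "content": user_prompt}],
        # Structured Outputs: constrain the reply to a JSON schema.
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "weather_report",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "temperature_c": {"type": "number"},
                    },
                    "required": ["city", "temperature_c"],
                    "additionalProperties": False,
                },
            },
        },
        # Tool Calling: declare a function the model may choose to invoke.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # placeholder tool name
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_request("What is the weather in Paris?")
print(json.dumps(payload, indent=2))
```

In practice this payload would be sent to the chat completions endpoint (for example via the OpenAI SDK pointed at OneRouter's base URL, as covered under Frameworks and Integrations).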