Latency and Performance
Understanding OneRouter's performance characteristics.
OneRouter is designed with performance as a top priority and is heavily optimized to add as little latency as possible to your requests.
Base Latency
Under typical production conditions, OneRouter adds approximately 100ms of latency to your requests. This minimal overhead is achieved through:
Edge computing using Cloudflare Workers to stay as close as possible to your application
Efficient caching of user and API key data at the edge
Optimized routing logic that minimizes processing time
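If you want to verify the overhead for your own workload, a simple approach is to time the same request issued directly to a provider and through the router, and compare medians. The sketch below uses stub functions in place of real network calls; the function names and simulated delays are illustrative only.

```python
import time

def measure_latency(call, runs=5):
    """Time a callable several times and return the median latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

# Stand-ins for real requests; in practice these would call the provider
# directly and through OneRouter, respectively.
def direct_request():
    time.sleep(0.010)  # simulated 10ms provider round trip

def routed_request():
    time.sleep(0.011)  # simulated round trip plus router overhead

overhead_ms = measure_latency(routed_request) - measure_latency(direct_request)
print(f"estimated router overhead: {overhead_ms:.1f}ms")
```

Using the median rather than the mean keeps a single slow outlier from skewing the estimate.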
Performance Considerations
Cache Warming
When OneRouter's edge caches are cold (typically during the first 5 minutes of operation in a new region), you may experience slightly higher latency; this normalizes once the caches are populated.
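If you control when traffic starts (for example, after a deploy to a new region), you can send a few inexpensive requests up front so production traffic does not pay the cold-cache cost. This is a minimal sketch; `send_request` stands in for your own wrapper around a small OneRouter completion call.

```python
import time

def warm_cache(send_request, attempts=3, pause_s=1.0):
    """Issue a few cheap requests so edge caches are populated before
    production traffic arrives. Returns the observed latencies in ms,
    which should trend downward as the caches warm."""
    latencies = []
    for _ in range(attempts):
        start = time.perf_counter()
        send_request()
        latencies.append((time.perf_counter() - start) * 1000)
        time.sleep(pause_s)
    return latencies

# Stub standing in for a real request, so the sketch runs offline.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1

warm_cache(fake_request, attempts=2, pause_s=0.0)
```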
Credit Balance Checks
To maintain accurate billing and prevent overages, OneRouter performs additional database checks when a user's credit balance is low (single-digit dollars).
OneRouter expires caches more aggressively under these conditions to ensure proper billing, which increases latency until additional credits are added.
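A client-side guard can alert you before the balance drops into the range that triggers the more aggressive cache expiry. This is an illustrative sketch: `get_balance` is a hypothetical callable returning your current balance in dollars, not part of OneRouter's API.

```python
LOW_BALANCE_THRESHOLD = 10.0  # single-digit dollars triggers extra checks

def check_balance(get_balance, threshold=LOW_BALANCE_THRESHOLD):
    """Warn when the balance is low enough that OneRouter expires caches
    more aggressively, increasing request latency."""
    balance = get_balance()
    if balance < threshold:
        return f"low balance (${balance:.2f}): expect higher latency; add credits"
    return "ok"

print(check_balance(lambda: 4.50))
print(check_balance(lambda: 75.00))
```

Running this check on a schedule (or before large batch jobs) gives you time to top up before latency is affected.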
Model Fallback
When using provider routing, if the primary model or provider fails, OneRouter automatically tries the next option. A failed initial attempt adds latency to that specific request, but OneRouter tracks provider failures and intelligently routes around unavailable providers so that this penalty is not incurred on every request.
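The fallback-with-failure-tracking pattern can be sketched as a small client-side router: try providers in order, and skip any that failed within a cooldown window. The provider names and callable interface below are illustrative, not OneRouter's internal implementation.

```python
import time

class FallbackRouter:
    """Minimal sketch: try providers in order, skipping any that failed
    within the last `cooldown_s` seconds."""

    def __init__(self, providers, cooldown_s=30.0):
        self.providers = providers   # list of (name, callable) pairs
        self.cooldown_s = cooldown_s
        self.failed_at = {}          # name -> timestamp of last failure

    def complete(self, prompt):
        now = time.monotonic()
        last_error = None
        for name, call in self.providers:
            last = self.failed_at.get(name)
            if last is not None and now - last < self.cooldown_s:
                continue  # recently failed; route around it
            try:
                return call(prompt)
            except Exception as exc:
                self.failed_at[name] = time.monotonic()
                last_error = exc
        raise RuntimeError("all providers unavailable") from last_error

# Usage: the first provider always fails, the second succeeds.
def flaky(prompt):
    raise TimeoutError("provider down")

def healthy(prompt):
    return f"echo: {prompt}"

router = FallbackRouter([("primary", flaky), ("backup", healthy)])
print(router.complete("hi"))     # first call pays the fallback latency
print(router.complete("again"))  # "primary" is skipped during cooldown
```

The second call avoids the failed provider entirely, which is the behavior that keeps fallback latency from being incurred on every request.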
Best Practices
To achieve optimal performance with OneRouter:
Maintain Healthy Credit Balance
Recommended minimum balance: $50-100 to ensure smooth operation
Use Provider Preferences
If you have specific latency requirements (whether time to first token, or time to last), there are provider routing features to help you achieve your performance and cost goals.
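As an illustration, a request body with provider preferences might look like the following. The field names (`provider`, `sort`, `allow_fallbacks`) are assumptions for the sake of the example, not OneRouter's documented schema; check the provider routing docs for the actual parameters.

```python
# Hypothetical request body illustrating provider-routing preferences.
# The "provider" block and its fields are illustrative assumptions.
request_body = {
    "model": "example/model",
    "messages": [{"role": "user", "content": "Hello"}],
    "provider": {
        "sort": "latency",        # prefer lowest time to first token
        "allow_fallbacks": True,  # try the next provider on failure
    },
}
```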