OpenAI Codex CLI
Codex CLI is a terminal-based coding agent that combines local execution with cloud AI capabilities. Unlike code generation tools that only produce code snippets, Codex CLI can understand your entire project, execute the code it creates, debug issues, and iterate until solutions work correctly.
How to Access OneRouter
Installation
Install via npm (Recommended)
npm install -g @openai/codex
Install via Homebrew (macOS)
brew install codex
Verify Installation
codex --version
Configuring OneRouter AI Models
Setup Configuration File
Codex CLI uses a TOML configuration file located at:
macOS/Linux:
~/.codex/config.toml
Windows:
%USERPROFILE%\.codex\config.toml
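The two locations above differ only in how the home directory is resolved. A small cross-platform sketch (the helper name is ours, not part of Codex CLI):

```python
import os
import sys
from pathlib import Path

def codex_config_path() -> Path:
    """Resolve the Codex CLI config path described above (illustrative helper)."""
    if sys.platform == "win32":
        # %USERPROFILE%\.codex\config.toml
        base = Path(os.environ.get("USERPROFILE", str(Path.home())))
    else:
        # ~/.codex/config.toml
        base = Path.home()
    return base / ".codex" / "config.toml"
```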
Basic Configuration Template
model = "gpt-5.1-chat"
model_provider = "onerouter"
[model_providers.onerouter]
name = "OneRouter"
base_url = "https://llm.onerouter.pro/v1"
http_headers = {"Authorization" = "Bearer YOUR_ONEROUTER_API_KEY"}
wire_api = "chat"
Getting Started
Launch Codex CLI
codex
Basic Usage Examples
Code Generation:
> Create a Python class for handling REST API responses with error handling

Project Analysis:
> Review this codebase and suggest improvements for performance
Bug Fixing:
> Fix the authentication error in the login function
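For the first prompt above, the kind of code Codex might produce looks roughly like the following sketch (our illustration, not actual Codex output): a small wrapper class that separates successful responses from errors.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class APIResponse:
    """Wraps a REST API response with basic error handling (illustrative sketch)."""
    status_code: int
    body: Any = None
    error: Optional[str] = None

    @property
    def ok(self) -> bool:
        # 2xx status with no recorded error counts as success
        return 200 <= self.status_code < 300 and self.error is None

    @classmethod
    def from_raw(cls, status_code: int, body: Any = None) -> "APIResponse":
        """Build a response object, recording an error message for non-2xx codes."""
        if 200 <= status_code < 300:
            return cls(status_code=status_code, body=body)
        return cls(status_code=status_code, error=f"HTTP {status_code}")

    def unwrap(self) -> Any:
        """Return the body, raising RuntimeError on error responses."""
        if not self.ok:
            raise RuntimeError(self.error or f"HTTP {self.status_code}")
        return self.body
```

In an interactive session, Codex would go further: running the code, adding tests, and iterating on failures until the class behaves correctly.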
Conclusion
Codex CLI with OneRouter AI models provides a powerful, flexible development environment that combines local control with cloud AI capabilities. By choosing the right model for each task and configuring your environment properly, you can significantly accelerate your development workflow while maintaining code quality and security.