# Codex CLI

OpenAI's Codex CLI, routed through BeansAI with one config file.
## Overview

Codex reads `~/.codex/config.toml` and `~/.codex/auth.json` on startup. Define BeansAI as a custom `model_provider` and the CLI picks it up transparently.
## Installation

```shell
npm install -g @openai/codex
```

## Configuration
`~/.codex/config.toml`

```toml
model_provider = "OpenAI"
model = "gpt-5.5"
review_model = "gpt-5.5"
model_reasoning_effort = "high"
disable_response_storage = true
network_access = "enabled"
model_context_window = 1000000
model_auto_compact_token_limit = 900000
approval_policy = "never"
sandbox_mode = "workspace-write"
suppress_unstable_features_warning = true

[shell_environment_policy]
inherit = "all"

[features]
image_generation = true
plan_tool = true
apply_patch_freeform = true
view_image_tool = true
hooks = true
memories = true

[windows]
sandbox = "unelevated"

[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://api.beansai.dev/v1"
wire_api = "responses"
requires_openai_auth = true
```

`~/.codex/auth.json`

```json
{
  "OPENAI_API_KEY": "sk-beans-..."
}
```

## macOS / Linux
```bash
mkdir -p ~/.codex
cat > ~/.codex/config.toml <<'EOF'
model_provider = "OpenAI"
model = "gpt-5.5"
review_model = "gpt-5.5"
model_reasoning_effort = "high"
disable_response_storage = true
network_access = "enabled"
model_context_window = 1000000
model_auto_compact_token_limit = 900000
approval_policy = "never"
sandbox_mode = "workspace-write"
suppress_unstable_features_warning = true

[shell_environment_policy]
inherit = "all"

[features]
image_generation = true
plan_tool = true
apply_patch_freeform = true
view_image_tool = true
hooks = true
memories = true

[windows]
sandbox = "unelevated"

[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://api.beansai.dev/v1"
wire_api = "responses"
requires_openai_auth = true
EOF
cat > ~/.codex/auth.json <<'EOF'
{
  "OPENAI_API_KEY": "sk-beans-..."
}
EOF
echo "Codex CLI configured."
```

## Image Generation
BeansAI also supports image generation from Codex. Apply the image preset below, then ask Codex to generate or edit an image in plain language.
The image preset matches the base `~/.codex/config.toml` above: `image_generation = true` is already set under `[features]`, so no further changes are needed. With the config in place, ask for an image directly:

```bash
codex "Generate a cyberpunk-style poster of coffee beans"
```

## Windows (PowerShell)
```powershell
$enc = [System.Text.UTF8Encoding]::new($false)
$bom = [char]0xFEFF
$dir = "$env:USERPROFILE\.codex"
if (!(Test-Path $dir)) { New-Item -ItemType Directory -Path $dir -Force | Out-Null }
$configContent = (@'
model_provider = "OpenAI"
model = "gpt-5.5"
review_model = "gpt-5.5"
model_reasoning_effort = "high"
disable_response_storage = true
network_access = "enabled"
model_context_window = 1000000
model_auto_compact_token_limit = 900000
approval_policy = "never"
sandbox_mode = "workspace-write"
suppress_unstable_features_warning = true

[shell_environment_policy]
inherit = "all"

[features]
image_generation = true
plan_tool = true
apply_patch_freeform = true
view_image_tool = true
hooks = true
memories = true

[windows]
sandbox = "unelevated"

[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://api.beansai.dev/v1"
wire_api = "responses"
requires_openai_auth = true
'@).TrimStart($bom)
[System.IO.File]::WriteAllText("$dir\config.toml", $configContent, $enc)
$authContent = (@'
{
  "OPENAI_API_KEY": "sk-beans-..."
}
'@).TrimStart($bom)
[System.IO.File]::WriteAllText("$dir\auth.json", $authContent, $enc)
Write-Host "Codex CLI configured."
```

## Basic usage
Run `codex` in any repo. The CLI defaults to `gpt-5.5` via BeansAI; change `model` in `config.toml` to route to any other OpenAI model in our catalog.
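Switching models means editing the two model keys. A minimal sketch of that edit as a scripted substitution, run here against a scratch copy so nothing touches `~/.codex` until you are happy with it (`gpt-5.5-mini` is a placeholder name; substitute any model from the catalog):

```shell
# Work on a scratch copy first; point the path at ~/.codex/config.toml for real use.
cfg="$(mktemp -d)/config.toml"
printf 'model = "gpt-5.5"\nreview_model = "gpt-5.5"\n' > "$cfg"

# Swap both model keys in one pass ("gpt-5.5-mini" is a placeholder model name).
# -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's/"gpt-5.5"/"gpt-5.5-mini"/g' "$cfg"
cat "$cfg"
```

Rewriting both keys together keeps `model` and `review_model` in sync, which matters for the cross-model diff issue noted in Tips.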
## Tips

- Set `disable_response_storage = true` so prompts stay off OpenAI's servers entirely.
- `wire_api = "responses"` uses the Responses API shape; BeansAI supports it natively end-to-end.
- Keep the review and main models identical to avoid cross-model diff surprises.