Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For the full list of options, see [Configuration](./docs/config.md).
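As a minimal sketch of what that file can hold, the snippet below appends two settings. The `model` and `model_provider` key names are assumptions for illustration only; confirm the actual schema against [Configuration](./docs/config.md) before relying on them.

```shell
# Hypothetical example: append a default model preference to the Codex config.
# The `model` and `model_provider` keys are illustrative assumptions; consult
# docs/config.md for the keys your Codex version actually supports.
mkdir -p ~/.codex
cat >> ~/.codex/config.toml <<'EOF'
model = "qwen3"
model_provider = "lmstudio"
EOF
```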
### Using Codex with LM Studio
Codex can run fully locally by delegating inference to [LM Studio](https://lmstudio.ai/).
1. Launch LM Studio and enable the **Local Inference Server** (Preferences → Developer).
2. Start any LM Studio model from the **My Models** tab. Codex looks for the model identifier exposed by the LM Studio server.
3. Run Codex with the LM Studio backend:
```shell
# Interactive session using the default LLaMA 3.1 8B Instruct model
codex --backend lmstudio

# Explicitly pick one of the supported architectures
codex --backend lmstudio --model qwen3
codex exec --backend lmstudio --model qwen3-moe "summarize this repo"
```
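If Codex reports that it cannot reach LM Studio, you can sanity-check the local server directly before retrying. The sketch below assumes LM Studio's default local-server port of 1234; adjust the URL if you configured a different one.

```shell
# List the models the LM Studio server currently exposes.
# 1234 is LM Studio's default local-server port; change it if yours differs.
curl http://localhost:1234/v1/models
```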
Codex understands a set of architecture aliases (such as `qwen3` and `qwen3-moe`, used above) when `--backend lmstudio` is selected. You can also pass the exact LM Studio identifier (for example `my-org/custom-model`) if you are running a different checkpoint. Codex verifies that the requested model is available from LM Studio and surfaces a clear error when it is not.
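For instance, a run against a custom checkpoint might look like this; the `my-org/custom-model` identifier is the placeholder from the example above, not a real model name.

```shell
# Pass the exact identifier shown in LM Studio for a custom checkpoint
codex --backend lmstudio --model my-org/custom-model
```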