# Config Overview
Coco's configuration controls AI provider settings, output behavior, file filtering, and more. It can be defined in multiple locations, each with a specific priority.
coco merges configuration from multiple sources. Higher-priority sources override lower ones:
- Command Line Flags — highest priority, useful for one-off changes
- Environment Variables — `COCO_*`-prefixed variables (see below)
- Project Config — `.coco.json` or `.coco.config.json` in your project root
- Git Profile — `[coco]` section in `~/.gitconfig`
- XDG Config — `$XDG_CONFIG_HOME/coco/config.json` (defaults to `~/.config/coco/config.json`)
- Built-in Defaults — sensible defaults for all settings
Run `coco doctor` to see which sources are active and where your config is loaded from.

Coco looks for `.coco.json` first, then falls back to `.coco.config.json` for backward compatibility. New projects should use `.coco.json` — it's shorter and follows the dotfile convention. Both filenames work identically.
| Option | Type | Default | Description |
|---|---|---|---|
| `mode` | `"stdout" \| "interactive"` | `"stdout"` | Output destination for generated results |
| `verbose` | `boolean` | `false` | Enable verbose logging |
| `defaultBranch` | `string` | `"main"` | Default git branch for the repository |
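A minimal `.coco.json` exercising just these top-level options might look like this (the values shown are illustrative, not required):

```json
{
  "mode": "interactive",
  "verbose": false,
  "defaultBranch": "main"
}
```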
| Option | Type | Default | Description |
|---|---|---|---|
| `conventionalCommits` | `boolean` | `false` | Generate commit messages in Conventional Commits format |
| `includeBranchName` | `boolean` | `true` | Include current branch name in commit prompt for context |
| `openInEditor` | `boolean` | `false` | Open commit message in editor for editing before proceeding |
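For example, a project that wants Conventional Commits output with the other commit options left at their defaults (spelled out here for clarity) could use:

```json
{
  "conventionalCommits": true,
  "includeBranchName": true,
  "openInEditor": false
}
```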
| Option | Type | Default | Description |
|---|---|---|---|
| `ignoredFiles` | `string[]` | `["package-lock.json", ".gitignore", ".ignore"]` | File paths or patterns to ignore (uses minimatch) |
| `ignoredExtensions` | `string[]` | `[".map", ".lock"]` | File extensions to ignore during processing |
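As a sketch, extending the defaults with minimatch patterns for lockfiles and build output (the exact patterns here are illustrative):

```json
{
  "ignoredFiles": ["package-lock.json", "yarn.lock", "dist/*"],
  "ignoredExtensions": [".map", ".lock", ".min.js"]
}
```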
| Option | Type | Default | Description |
|---|---|---|---|
| `prompt` | `string` | - | Custom prompt text for generating results |
| `summarizePrompt` | `string` | - | Custom prompt for summarizing large files |
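Both options take free-form strings. A hypothetical sketch (these prompt strings are placeholders, not shipped defaults):

```json
{
  "prompt": "Describe these changes in one imperative sentence.",
  "summarizePrompt": "Condense this diff to its key functional changes."
}
```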
| Option | Type | Default | Description |
|---|---|---|---|
| `autoFixTool` | `string` | - | AI CLI tool for auto-fixing review issues (e.g. `"codex"`, `"claude"`, `"gemini"`). When unset, auto-fix is disabled. |
| `autoFixToolOptions` | `Record<string, string>` | - | Extra flags passed to the auto-fix CLI tool. Keys are flag names without leading dashes. |
Example:

```json
{
  "autoFixTool": "codex",
  "autoFixToolOptions": {
    "model": "o4-mini",
    "approval-mode": "auto-edit"
  }
}
```

The `logTui` object configures the full-screen `coco log -i` experience.
| Option | Type | Default | Description |
|---|---|---|---|
| `logTui.theme.preset` | `"default" \| "monochrome" \| "catppuccin" \| "gruvbox"` | `"default"` | Theme preset for the interactive log TUI |
| `logTui.theme.borderStyle` | `"round" \| "single" \| "classic"` | `"round"` | Panel border style, with `classic` used for ASCII-friendly terminals |
| `logTui.theme.ascii` | `boolean` | auto-detected | Force ASCII-compatible borders |
| `logTui.theme.colors` | `object` | preset values | Semantic color token overrides |
| `logTui.idleTips` | `boolean` | `false` | Rotate short usage tips through the status line after ~10s of idle. Off by default so the status line stays quiet for users who prefer minimal chrome; flip it on if you're new to the chord model and want the prompts. |
Example:

```json
{
  "logTui": {
    "theme": {
      "preset": "catppuccin",
      "borderStyle": "round",
      "colors": {
        "accent": "#89b4fa",
        "focusBorder": "#89dceb",
        "selection": "#45475a"
      }
    }
  }
}
```

`NO_COLOR=1` overrides color settings and keeps the TUI readable in monochrome. Terminals such as `TERM=dumb` use the ASCII-safe rendering path.
The `service` object configures the AI provider and model settings:
| Option | Type | Default | Description |
|---|---|---|---|
| `service.provider` | `"openai" \| "anthropic" \| "ollama"` | - | AI provider to use |
| `service.model` | `string \| "dynamic"` | - | Model name, or `"dynamic"` for task-based routing |
| `service.tokenLimit` | `number` | `4096` | Maximum tokens per request (default raised from 2048 to match the canonical service configs in `langchain/utils.ts`) |
| `service.temperature` | `number` | `0.4` | Randomness (0.0–1.0). Lower is more deterministic. |
| `service.maxConcurrent` | `number` | `6` | Maximum concurrent requests |
| `service.minTokensForSummary` | `number` | `400` | Minimum token count for a file group to be eligible for summarization |
| `service.maxFileTokens` | `number` | 25% of `tokenLimit` | Maximum tokens for a single file diff before pre-summarization |
| `service.maxParsingAttempts` | `number` | `3` | Maximum schema parsing retry attempts (increase for Ollama) |
| `service.baseURL` | `string` | provider default | Custom base URL for OpenAI-compatible APIs (e.g. OpenRouter, Azure) |
| `service.endpoint` | `string` | `http://localhost:11434` | Ollama server URL (Ollama only) |
| `service.authentication` | `object` | - | Authentication config (see below) |
| `service.requestOptions.timeout` | `number` | - | Request timeout in milliseconds |
| `service.requestOptions.maxRetries` | `number` | - | Maximum request retries |
| `service.fields` | `object` | - | Provider-specific extra options forwarded to the LangChain client |
The diff-condensing pipeline ships several lossless optimizations on by default — content-hash cache, trivial-shape skip for pure additions / deletions / renames / binary diffs, sort discipline, rate-limit retries. None of these change the output detail; they just avoid redundant work.
`service.fastPath` exposes opt-in lossy optimizations that trade summary detail for speed. They are off by default. When you enable one, you accept that final commit messages on the affected file shapes may be blander than LLM-generated summaries — the templated extract names structural changes only.
| Option | Type | Default | Description |
|---|---|---|---|
| `service.fastPath.markdown` | `boolean` | `false` | Replace the LLM summary with a templated heading extract for `.md` / `.mdx` / `.markdown` modification diffs that have clear heading-level structural changes. Diffs without structural signals (paragraph-only edits) still go to the LLM regardless. |
When to enable `fastPath.markdown`:
- You commit doc changes frequently and want the speed win on docs-shaped commits.
- Your downstream commit-message style for docs is naturally structural (e.g. "docs: update OAuth section") and doesn't lean on LLM-generated detail.
- You're running on a slow / rate-limited model and the latency saved per markdown file is worth the loss in summary nuance.
When to leave it off (default):
- You want commit messages on doc commits to retain LLM-generated detail like "expanded the OAuth section with PKCE examples and clarified rate-limit behavior."
- Your team uses doc commit messages for changelogs or audit trails.
```json
{
  "service": {
    "provider": "openai",
    "model": "gpt-4.1-nano",
    "fastPath": {
      "markdown": true
    }
  }
}
```

The templated summary format looks like:

```
Updated markdown `docs/configuration.md`. new sections: Authentication, Rate limits. updated sections: Setup. +85/-12 lines.
```
When `service.model` is set to `"dynamic"`, Coco selects a task-appropriate model automatically.
See Dynamic Model Routing for the full guide.
| Option | Type | Default | Description |
|---|---|---|---|
| `service.dynamicModelPreference` | `"cost" \| "balanced" \| "quality"` | `"balanced"` | Default routing preference |
| `service.dynamicModels` | `object` | provider defaults | Per-task model overrides |
Supported task keys: `summarize`, `commit`, `changelog`, `review`, `recap`, `repair`, `largeDiff`.
```json
{
  "service": {
    "provider": "openai",
    "model": "dynamic",
    "dynamicModelPreference": "balanced",
    "dynamicModels": {
      "summarize": "gpt-4.1-nano",
      "commit": "gpt-4.1-mini",
      "review": "gpt-4.1"
    }
  }
}
```

Three authentication types are supported:
| Type | Use case | Required fields |
|---|---|---|
| `APIKey` | OpenAI, Anthropic, OpenRouter | `credentials.apiKey` |
| `OAuth` | OAuth-based providers | `credentials.clientId`, `credentials.clientSecret`, `credentials.token` |
| `None` | Ollama (local) | none |
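The `APIKey` type appears in the provider examples below. For `OAuth`, the credentials block carries the three fields from the table; a hypothetical sketch with placeholder values:

```json
{
  "service": {
    "authentication": {
      "type": "OAuth",
      "credentials": {
        "clientId": "your-client-id",
        "clientSecret": "your-client-secret",
        "token": "your-oauth-token"
      }
    }
  }
}
```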
OpenAI Configuration:

```json
{
"service": {
"provider": "openai",
"model": "gpt-4o",
"tokenLimit": 2048,
"temperature": 0.4,
"maxConcurrent": 6,
"authentication": {
"type": "APIKey",
"credentials": {
"apiKey": "sk-..."
}
},
"requestOptions": {
"timeout": 30000,
"maxRetries": 3
},
"maxParsingAttempts": 3,
"fields": {
"topP": 1.0,
"frequencyPenalty": 0.0,
"presencePenalty": 0.0
}
}
}
```

OpenAI via OpenRouter:
```json
{
"service": {
"provider": "openai",
"model": "anthropic/claude-3.5-sonnet",
"baseURL": "https://openrouter.ai/api/v1",
"authentication": {
"type": "APIKey",
"credentials": {
"apiKey": "sk-or-v1-..."
}
}
}
}
```

Anthropic Configuration:
```json
{
"service": {
"provider": "anthropic",
"model": "claude-haiku-4-5-20251001",
"tokenLimit": 4096,
"temperature": 0.4,
"authentication": {
"type": "APIKey",
"credentials": {
"apiKey": "sk-ant-..."
}
},
"fields": {
"maxTokens": 2048
}
}
}
```

Ollama Configuration:
```json
{
"service": {
"provider": "ollama",
"model": "qwen2.5-coder:7b",
"endpoint": "http://localhost:11434",
"tokenLimit": 2048,
"temperature": 0.4,
"authentication": {
"type": "None"
},
"fields": {
"numCtx": 4096,
"numPredict": 2048
}
}
}
```

Coco's diff-condensing pipeline is bounded summarization (input: a diff; output: 1–3 sentences). The fast / cheap tier of each provider is the right default for it; the larger models are worth picking only when summary quality is more important than latency or cost.
OpenAI Models:

- `gpt-4.1-nano` (default — fastest / cheapest in the GPT-4.1 line)
- `gpt-4.1-mini`, `gpt-4.1` (step up if you want richer summaries)
- `gpt-4o`, `gpt-4o-mini` (older but still solid)
- `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo` (legacy — pin only if you have a reason)
Anthropic Models:

- `claude-haiku-4-5-20251001` (default — current fast tier; right pick for diff condensing)
- `claude-haiku-4-5`, `claude-sonnet-4-6`, `claude-opus-4-7` (current generation; pick the bigger model when quality matters more than speed)
- `claude-sonnet-4-0` (earlier 4.x line)
- `claude-3-7-sonnet-latest`, `claude-3-5-sonnet-latest`, `claude-3-5-haiku-latest`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` (pre-4.x, kept for back-compat)
Ollama Models:

- `qwen2.5-coder:7b` (recommended for code)
- `llama3.1:8b` (recommended general purpose)
- `deepseek-r1:8b`, `deepseek-r1:32b`
- `codellama:7b`, `codellama:13b`, `codellama:34b`
- `llama3.2:1b`, `llama3.2:3b`
- And many more (see `src/lib/langchain/types.ts` for the full list)
Picking a model for coco: the diff-condensing pipeline doesn't reason or chain-of-thought; it summarizes. Default to your provider's fast tier and step up only when you've eyeballed a few generated commits and decided the quality isn't there. The defaults change over time as providers refresh their lineups — the values shipped in `src/lib/langchain/utils.ts` are the current recommendations.
The `coco init` command simplifies generating and updating your config file. When you run `coco init`, you'll be guided through an interactive setup process where you can customize your installation. This command can:
- Create a new config file in your chosen location
- Update an existing config with new settings
- Help you manage configurations across different scopes (global or project-specific)
- Set up AI provider authentication
- Configure conventional commits and commitlint integration
Here's an example of a comprehensive `.coco.json` file:

```json
{
"$schema": "https://git-co.co/schema.json",
"mode": "interactive",
"verbose": false,
"conventionalCommits": true,
"includeBranchName": true,
"openInEditor": false,
"defaultBranch": "main",
"logTui": {
"theme": {
"preset": "catppuccin",
"borderStyle": "round"
}
},
"ignoredFiles": [
"package-lock.json",
"yarn.lock",
"pnpm-lock.yaml",
"dist/*",
"build/*",
"node_modules/*"
],
"ignoredExtensions": [
".map",
".lock",
".min.js",
".min.css"
],
"service": {
"provider": "openai",
"model": "gpt-4o",
"tokenLimit": 4096,
"temperature": 0.3,
"maxConcurrent": 6,
"authentication": {
"type": "APIKey",
"credentials": {
"apiKey": "sk-..."
}
},
"requestOptions": {
"timeout": 60000,
"maxRetries": 3
},
"maxParsingAttempts": 3
}
}
```

You can also set coco configurations in your `.gitconfig` file:
```ini
[user]
name = Your Name
email = your.email@example.com
# -- Start coco config --
[coco]
mode = interactive
conventionalCommits = true
defaultBranch = main
verbose = false
# Service configuration
serviceProvider = openai
serviceModel = gpt-4o
serviceApiKey = sk-...
serviceTokenLimit = 4096
serviceTemperature = 0.3
# -- End coco config --
```

Set configuration options as environment variables. Coco reads `COCO_*`-prefixed variables
and maps them to config fields:
```bash
# Core settings
export COCO_MODE=interactive
export COCO_VERBOSE=true
export COCO_CONVENTIONAL_COMMITS=true
export COCO_DEFAULT_BRANCH=main
# Service configuration
export COCO_SERVICE_PROVIDER=openai
export COCO_SERVICE_MODEL=gpt-4o
export COCO_SERVICE_API_KEY=sk-... # or use OPENAI_API_KEY
export COCO_SERVICE_TOKEN_LIMIT=4096
export COCO_SERVICE_TEMPERATURE=0.3
export COCO_SERVICE_BASE_URL=https://openrouter.ai/api/v1 # OpenAI-compatible endpoints
export COCO_SERVICE_ENDPOINT=http://localhost:11434 # Ollama only
# Dynamic model routing
export COCO_SERVICE_DYNAMIC_MODEL_PREFERENCE=balanced
export COCO_SERVICE_DYNAMIC_MODELS='{"summarize":"gpt-4.1-nano","commit":"gpt-4.1-mini"}'
# File processing (comma-separated)
export COCO_IGNORED_FILES="*.lock,dist/*,node_modules/*"
export COCO_IGNORED_EXTENSIONS=".map,.min.js,.min.css"
```

Provider-specific API key environment variables (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) are
also recognized and used as fallbacks when the config does not include an explicit key.
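For example, a config with no authentication block at all would work as long as the matching provider variable (here `OPENAI_API_KEY`) is exported in your shell; this is a sketch of the fallback behavior described above:

```json
{
  "service": {
    "provider": "openai",
    "model": "gpt-4o"
  }
}
```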
When using different configuration methods, follow these naming conventions:
| Config File | Environment Variable | Git Config | CLI Flag |
|---|---|---|---|
| `mode` | `COCO_MODE` | `coco.mode` | `--mode` |
| `conventionalCommits` | `COCO_CONVENTIONAL_COMMITS` | `coco.conventionalCommits` | `--conventional` |
| `includeBranchName` | `COCO_INCLUDE_BRANCH_NAME` | `coco.includeBranchName` | `--include-branch-name` |
| `service.provider` | `COCO_SERVICE_PROVIDER` | `coco.serviceProvider` | N/A |
| `service.model` | `COCO_SERVICE_MODEL` | `coco.serviceModel` | N/A |
| `service.apiKey` | `COCO_SERVICE_API_KEY` | `coco.serviceApiKey` | N/A |
| `service.baseURL` | `COCO_SERVICE_BASE_URL` | `coco.serviceBaseURL` | N/A |
| `service.endpoint` | `COCO_SERVICE_ENDPOINT` | `coco.serviceEndpoint` | N/A |
| `service.tokenLimit` | `COCO_SERVICE_TOKEN_LIMIT` | `coco.serviceTokenLimit` | N/A |
| `service.dynamicModelPreference` | `COCO_SERVICE_DYNAMIC_MODEL_PREFERENCE` | N/A | N/A |
| `service.dynamicModels` | `COCO_SERVICE_DYNAMIC_MODELS` (JSON) | N/A | N/A |
Run `coco doctor` to scan your configuration for common issues:

```bash
# Diagnose config problems
coco doctor

# Auto-fix what can be fixed (model upgrades, missing fields, etc.)
coco doctor --fix
```

**1. API Key Not Found**
```bash
# Set via environment variable
export COCO_SERVICE_API_KEY=sk-...
```

Or add to the config file:

```json
{
  "service": {
    "authentication": {
      "type": "APIKey",
      "credentials": {
        "apiKey": "sk-..."
      }
    }
  }
}
```

**2. pnpm Compatibility Issues**
```bash
# Update commitlint packages
pnpm add -D @commitlint/config-conventional@latest @commitlint/cli@latest
```

Or disable commitlint validation:

```json
{
  "conventionalCommits": false
}
```

**3. Model Not Available**
```bash
# Check available models for your provider
coco init --scope project
```

Or update to a supported model:

```json
{
  "service": {
    "model": "gpt-4o"
  }
}
```

Use `coco init` to validate your configuration:
```bash
# Validate current configuration
coco init --scope project

# Test with verbose output
coco --verbose commit
```

- **Use Project-Specific Configs**: Keep `.coco.json` in your project root for team consistency
- **Environment Variables for CI/CD**: Use environment variables in automated environments
- **Git Config for Personal Settings**: Use `.gitconfig` for personal preferences across projects
- **Secure API Keys**: Never commit API keys to version control; use environment variables or git config
- **Start Simple**: Begin with basic configuration and add complexity as needed
- **Run `coco doctor`**: After upgrading coco or changing config, run `coco doctor` to catch issues
- **Regular Updates**: Keep your configuration updated with new features and model improvements