pdd setup: Expand to all LLM providers and improve API key management #480

@niti-go

Description

pdd setup Overhaul — Comprehensive System Setup

Context

pdd setup (setup_tool.py) was written as an early hack: it only lets users configure 3 providers, uses raw HTTP requests instead of the full LLM pipeline (like llm_invoke), and is unaware of agentic CLI harnesses, local LLMs, or .pddrc. Managing models requires manually editing the llm_model.csv file. The goal is to make pdd setup a comprehensive menu that scans the user's full environment and guides the user through configuration.

Later, we could add an agentic layer on top of this to make the menu flow easier for new users. For now, this implements much-needed core functionality for managing LLM providers and models.

This issue supersedes the setup-related portions of Issue #100 and PR #123.

New Flow

Setup auto-scans the environment for API keys (fast, no API calls — just checks existence and source), then presents a menu. The user picks what they need — no walking through every provider sequentially.

═══════════════════════════════════════════════════════
Scanning for API keys...
═══════════════════════════════════════════════════════

ANTHROPIC_API_KEY   ✓ Found  (shell environment)
OPENAI_API_KEY      ✓ Found  (.env file)
GROQ_API_KEY        ✓ Found  (shell environment)
GEMINI_API_KEY      — Not found
FIREWORKS_API_KEY   — Not found

Models configured: 12 (from 3 API keys + 1 local)

What would you like to do?
  1. Add a provider
  2. Remove models
  3. Test a model
  4. Detect CLI tools
  5. Initialize .pddrc
  6. Done

After any option (1–5), the user returns to this menu with an updated scan.

Changes

1. Dynamic provider discovery from CSV

Currently setup only looks for 3 hardcoded key names. Instead, read all unique api_key env var names from llm_model.csv and scan for all of them. This covers Fireworks, Groq, Vertex AI, and any future provider added to the CSV without code changes.
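The discovery step could be sketched roughly as follows; this assumes the CSV has an api_key column naming each model's env var (the exact column name is an assumption about the schema):

```python
import csv
import io

def discover_key_names(csv_text: str) -> list:
    """Collect the unique API-key env var names from llm_model.csv,
    preserving first-seen order. Rows with a blank api_key (e.g.
    local models) are skipped."""
    seen = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row.get("api_key") or "").strip()
        if key:
            seen.setdefault(key, None)
    return list(seen)

sample = (
    "provider,model,api_key\n"
    "anthropic,claude-haiku,ANTHROPIC_API_KEY\n"
    "anthropic,claude-sonnet,ANTHROPIC_API_KEY\n"
    "openai,gpt-5-nano,OPENAI_API_KEY\n"
    "lm_studio,local-model,\n"
)
print(discover_key_names(sample))  # → ['ANTHROPIC_API_KEY', 'OPENAI_API_KEY']
```

Because the key names come from the CSV itself, any provider row added later is scanned automatically, with no code change.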

2. Discover keys from all sources

Currently setup only checks shell env vars via os.getenv(). It should also check .env files to match what llm_invoke does at runtime. The hierarchy is .env files first, then shell env vars. The scan shows where each key comes from so the user has full transparency. The scan only checks existence of API keys — it does not make API calls or validate keys.
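A minimal sketch of the existence check, assuming the .env contents have already been parsed into a dict (the helper name is hypothetical):

```python
import os

def scan_key(name, dotenv_vars):
    """Report whether an API key exists and where it was found,
    checking .env values before the shell environment (the same
    hierarchy llm_invoke uses at runtime). Existence only; no
    API calls are made."""
    if dotenv_vars.get(name):
        return (True, ".env file")
    if os.environ.get(name):
        return (True, "shell environment")
    return (False, "not found")

dotenv = {"OPENAI_API_KEY": "sk-..."}  # parsed from a .env file
print(scan_key("OPENAI_API_KEY", dotenv))  # → (True, '.env file')
```

Returning the source alongside the boolean is what lets the menu print "(shell environment)" vs. "(.env file)" next to each key.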

3. Smart key storage

Currently setup forces all keys into api-env.{shell}. New rule:

  • Key entered during setup → save to api-env.{shell}
  • Key already in the environment → don't save (avoids duplicating keys managed by Infisical, .env, shell profile, etc.)

Saving keys...
  GROQ_API_KEY        → saved to ~/.pdd/api-env.zsh (entered during setup)
  GEMINI_API_KEY      → saved to ~/.pdd/api-env.zsh (entered during setup)
  ANTHROPIC_API_KEY   → skipped (already in shell environment)
  OPENAI_API_KEY      → skipped (already in .env file)
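The rule above reduces to a small decision function (a sketch; the function name and return strings are hypothetical):

```python
def key_save_action(entered_during_setup, found_source):
    """Decide what to do with a key when setup finishes.
    Keys typed in during setup are persisted to api-env.{shell};
    keys that already exist elsewhere (shell profile, .env,
    Infisical, ...) are left alone to avoid duplication."""
    if entered_during_setup:
        return "save to api-env"
    if found_source is not None:
        return "skip (already in %s)" % found_source
    return "nothing to do"

print(key_save_action(True, None))                  # → 'save to api-env'
print(key_save_action(False, "shell environment"))  # → 'skip (already in shell environment)'
```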

4. Add a provider (menu option 1)

Opens a sub-menu with three paths:

4a. Enter an API key

For providers that already exist in the master llm_model.csv (Anthropic, OpenAI, Google, Groq, Fireworks, etc.). The user enters or replaces a key, and all models for that provider are automatically loaded into the user's ~/.pdd/llm_model.csv. No interactive model selection — the user gets the full range by default and can trim later via "Remove models."

Missing API keys:

1. GEMINI_API_KEY
2. FIREWORKS_API_KEY

Or enter a key name directly.
Choice: 1

Enter GEMINI_API_KEY: AIza...
✓ Saved to ~/.pdd/api-env.zsh
✓ Loaded 3 Google models into llm_model.csv

If the key already exists (user wants to replace an expired key), update the value in api-env.{shell} without re-adding model rows.

If the key name doesn't match any existing CSV rows, tell them to use "Add a custom provider" instead.

4b. Add a local LLM

Local models don't need API keys — they need a base_url and model name. Setup guides through this and auto-detects where possible (e.g., querying Ollama's API for installed models).

What tool are you using?
  1. LM Studio (default: localhost:1234)
  2. Ollama (default: localhost:11434)
  3. Other (custom base URL)
  Choice: 2

Querying Ollama at http://localhost:11434...
Found installed models:
  1. llama3:70b
  2. codellama:34b
  3. mistral:7b

Which models do you want to add? [1,2,3]: 1,2
✓ Added ollama_chat/llama3:70b and ollama_chat/codellama:34b to llm_model.csv
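For the Ollama path, auto-detection can use Ollama's local REST API: GET /api/tags lists the installed models. A standard-library-only sketch:

```python
import json
import urllib.request

def parse_ollama_tags(payload):
    """Extract model names from Ollama's /api/tags response."""
    return [m["name"] for m in payload.get("models", [])]

def list_ollama_models(base_url="http://localhost:11434"):
    """Query a local Ollama server for its installed models."""
    with urllib.request.urlopen(base_url + "/api/tags", timeout=3) as resp:
        return parse_ollama_tags(json.load(resp))

# Shape of the /api/tags response (abridged):
sample = {"models": [{"name": "llama3:70b"}, {"name": "codellama:34b"}]}
print(parse_ollama_tags(sample))  # → ['llama3:70b', 'codellama:34b']
```

A short timeout matters here: if Ollama isn't running, setup should fall back to manual entry quickly rather than hang.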

4c. Add a custom provider

For any LiteLLM-compatible provider not in the master CSV (Together AI, Deepinfra, Mistral, etc.). Asks a few questions and writes the row to llm_model.csv so users never edit the CSV manually.

Provider prefix (e.g. together_ai, deepinfra, mistral): together_ai
Model name: meta-llama/Llama-3-70b-chat
API key env var name: TOGETHERAI_API_KEY
Base URL (press Enter if standard):
Cost per 1M input tokens (optional): 0.90
Cost per 1M output tokens (optional): 0.90

✓ Added to llm_model.csv
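The answers map onto one CSV row. A sketch of building that row; the column names here are assumptions about the llm_model.csv schema, not its actual layout:

```python
import csv
import io

# Assumed column set; the real llm_model.csv schema may differ.
COLUMNS = ["provider", "model", "api_key", "base_url",
           "input_cost_per_1m", "output_cost_per_1m"]

def custom_provider_row(prefix, model, key_env,
                        base_url="", in_cost="", out_cost=""):
    """Build a row for a LiteLLM-compatible custom provider.
    LiteLLM routes on the '<prefix>/<model>' form."""
    return {
        "provider": prefix,
        "model": prefix + "/" + model,
        "api_key": key_env,
        "base_url": base_url,
        "input_cost_per_1m": in_cost,
        "output_cost_per_1m": out_cost,
    }

row = custom_provider_row("together_ai", "meta-llama/Llama-3-70b-chat",
                          "TOGETHERAI_API_KEY", in_cost="0.90", out_cost="0.90")
buf = io.StringIO()
csv.DictWriter(buf, fieldnames=COLUMNS).writerow(row)  # appended to llm_model.csv in practice
print(row["model"])  # → 'together_ai/meta-llama/Llama-3-70b-chat'
```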

5. Remove models (menu option 2)

Opens a sub-menu with two modes to handle both bulk and individual removal:

5a. Remove all models for a provider

For when the user wants to stop using an entire provider. Removes all model rows for that API key from llm_model.csv and comments out the key in api-env.{shell} (never deletes — keys are painful to regenerate).

Remove all models for:
  1. ANTHROPIC_API_KEY (3 models)
  2. OPENAI_API_KEY (5 models)
  3. GROQ_API_KEY (1 model)
Provider: 3

# Commented out by pdd setup on 2026-02-14
# export GROQ_API_KEY='gsk_abc...'

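The comment-out step (as opposed to deletion) can be sketched as a line rewrite over the api-env file; the function name is hypothetical:

```python
import datetime

def comment_out_key(lines, key):
    """Comment out (never delete) a key's export line in
    api-env.{shell}, with a dated note explaining why."""
    stamp = datetime.date.today().isoformat()
    out = []
    for line in lines:
        if line.startswith("export " + key + "="):
            out.append("# Commented out by pdd setup on " + stamp)
            out.append("# " + line)
        else:
            out.append(line)
    return out

env_file = ["export GROQ_API_KEY='gsk_abc...'", "export OPENAI_API_KEY='sk-...'"]
print(comment_out_key(env_file, "GROQ_API_KEY")[1])  # → "# export GROQ_API_KEY='gsk_abc...'"
```

Re-enabling the provider later is then just uncommenting two lines instead of regenerating a key.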
5b. Remove individual models

For trimming specific models (e.g., removing an expensive tier) or removing a local LLM that has no API key to group by.

Your models:

1. anthropic/claude-haiku-4-5-20251001 ANTHROPIC_API_KEY
2. anthropic/claude-sonnet-4-5-20250929 ANTHROPIC_API_KEY
3. anthropic/claude-opus-4-5-20251101 ANTHROPIC_API_KEY
4. gpt-5-nano OPENAI_API_KEY
5. lm_studio/openai-gpt-oss-120b-mlx-6 (local)
Remove which? (comma-separated): 3,5

✓ Removed 2 models from llm_model.csv
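The selection logic is a simple filter over the displayed 1-based list (a sketch; the helper name is hypothetical):

```python
def remove_models(rows, picks):
    """Drop the 1-based, comma-separated indices the user typed,
    keeping everything else in order."""
    drop = {int(i) for i in picks.split(",") if i.strip()}
    return [r for n, r in enumerate(rows, start=1) if n not in drop]

models = ["claude-haiku", "claude-sonnet", "claude-opus",
          "gpt-5-nano", "local-oss"]
print(remove_models(models, "3,5"))  # → ['claude-haiku', 'claude-sonnet', 'gpt-5-nano']
```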

6. Test a model (menu option 3)

Tests a single model by making one litellm.completion() call with a minimal prompt. Only runs when the user explicitly chooses it — no surprise API costs. This tests the full LiteLLM stack for that model: key presence, authentication, model name correctness, base_url connectivity, and provider routing.

Uses litellm.completion() directly (not llm_invoke because llm_invoke doesn’t let you choose a specific model or key to use). The API key is passed directly via the api_key parameter.

Configured models:
  1. anthropic/claude-haiku-4-5-20251001 ANTHROPIC_API_KEY
  2. anthropic/claude-sonnet-4-5-20250929 ANTHROPIC_API_KEY
  3. gpt-5-nano OPENAI_API_KEY ✓ OK (0.2s, $0.0001)
  4. lm_studio/openai-gpt-oss-120b-mlx-6 (local)

Test which model? 4
Testing lm_studio/openai-gpt-oss-120b-mlx-6...
API key    (local — no key required)
Base URL   http://localhost:1234/v1
LLM call   ✗ Connection refused (localhost:1234)

Test which model? 1
Testing anthropic/claude-haiku-4-5-20251001...
API key    ANTHROPIC_API_KEY ✓ Found (shell environment)
LLM call   ✓ OK (0.3s, $0.0001)

7. Detect CLI tools (menu option 4)

Leverage the existing get_available_agents() from agentic_common.py to detect installed agentic CLI harnesses. Currently, users only discover missing harnesses when pdd fix, pdd change, or pdd bug fails.

Checking CLI tools...
(Required for: pdd fix, pdd change, pdd bug)

Claude CLI   ✓ Found at /usr/local/bin/claude
Codex CLI    ✗ Not found
Gemini CLI   ✗ Not found

You have OPENAI_API_KEY but Codex CLI is not installed.
Install with: npm install -g @openai/codex
Install now? [y/N]
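If get_available_agents() is not a direct fit for the menu, detection reduces to a PATH lookup; the binary names below are assumptions:

```python
import shutil

def detect_cli_tools(tools):
    """Map each CLI harness label to its path on PATH, or None
    if the binary is not installed."""
    return {label: shutil.which(binary) for label, binary in tools.items()}

found = detect_cli_tools({"Claude CLI": "claude",
                          "Codex CLI": "codex",
                          "Gemini CLI": "gemini"})
for label, path in found.items():
    print("{:12} {}".format(label, "✓ Found at " + path if path else "✗ Not found"))
```

Cross-referencing the result with the key scan is what enables hints like "you have OPENAI_API_KEY but Codex CLI is not installed."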

8. Initialize .pddrc (menu option 5)

Offer to create a basic .pddrc in the current project directory if one doesn't exist. Sets sensible defaults for output paths, language, and strength/temperature.

No .pddrc found in current project.

Would you like to create one with default settings?
  Default language: python
  Output path: pdd/
  Test output path: tests/

Create .pddrc? [Y/n]
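A minimal sketch of the create-if-missing behavior; the keys in the default file are hypothetical, since the real .pddrc schema is defined elsewhere in the project:

```python
from pathlib import Path

# Hypothetical defaults; the actual .pddrc schema may differ.
DEFAULT_PDDRC = """\
default_language: python
output_path: pdd/
test_output_path: tests/
"""

def init_pddrc(project_dir):
    """Create a default .pddrc only if none exists; never overwrite
    an existing project configuration."""
    path = Path(project_dir) / ".pddrc"
    if path.exists():
        return False
    path.write_text(DEFAULT_PDDRC)
    return True
```

The never-overwrite guard keeps this option safe to offer on every pass through the menu.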
