58 changes: 24 additions & 34 deletions README.md
Original file line number Diff line number Diff line change
@@ -102,7 +102,7 @@ With the CLI on your `PATH`, continue with:
```bash
pdd setup
```
The command installs tab completion, walks you through API key entry, and seeds local configuration files.
The command detects agentic CLI tools, scans for API keys, configures models, and seeds local configuration files.
If you postpone this step, the CLI detects the missing setup artifacts the first time you run another command and shows a reminder banner so you can complete it later (the banner is suppressed once `~/.pdd/api-env` exists or when your project already provides credentials via `.env` or `.pdd/`).
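The suppression rule above can be pictured as a small filesystem check. A minimal sketch, assuming the paths behave as described; `needs_setup_banner` is a hypothetical name for illustration, not pdd's API:

```python
from pathlib import Path

def needs_setup_banner(home: Path, project: Path) -> bool:
    """Hypothetical sketch of the reminder-banner rule: suppress the banner
    once ~/.pdd/api-env exists, or when the project already provides
    credentials via .env or .pdd/."""
    if (home / ".pdd" / "api-env").exists():
        return False  # global setup artifacts present
    if (project / ".env").exists() or (project / ".pdd").is_dir():
        return False  # project-level credentials present
    return True  # nothing found: show the reminder banner
```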

### Alternative: pip Installation
@@ -167,7 +167,7 @@ For CLI enthusiasts, implement GitHub issues directly:

2. **One Agentic CLI** - Required to run the workflows (install at least one):
- **Claude Code**: `npm install -g @anthropic-ai/claude-code` (requires `ANTHROPIC_API_KEY`)
- **Gemini CLI**: `npm install -g @google/gemini-cli` (requires `GOOGLE_API_KEY`)
- **Gemini CLI**: `npm install -g @google/gemini-cli` (requires `GOOGLE_API_KEY` or `GEMINI_API_KEY`)
- **Codex CLI**: `npm install -g @openai/codex` (requires `OPENAI_API_KEY`)

**Usage:**
@@ -222,21 +222,28 @@ If you want to understand PDD fundamentals, follow this manual example to see it

### Post-Installation Setup (Required first step after installation)

Run the guided setup:
Run the comprehensive setup wizard:
```bash
pdd setup
```

This wraps the interactive bootstrap utility to install shell tab completion, capture your API keys, create ~/.pdd configuration files, and write the starter prompt. Re-run it any time to update keys or reinstall completion.
The setup wizard runs these steps:
1. Detects agentic CLI tools (Claude, Gemini, Codex) and offers installation and API key configuration if needed
2. Scans for API keys across `.env`, `~/.pdd/api-env.*`, and the shell environment; prompts to add one if none are found
3. Configures models from a reference CSV (`data/llm_model.csv`) of top models (ELO ≥ 1400) across all LiteLLM-supported providers, based on your available keys
4. Optionally creates a `.pddrc` project config
5. Tests the first available model with a real LLM call
6. Prints a structured summary (CLIs, keys, models, test result)

If you skip this step, the first regular pdd command you run will detect the missing setup files and print a reminder banner so you can finish onboarding later.
The wizard can be re-run at any time to update keys, add providers, or reconfigure settings.

Reload your shell so the new completion and environment hooks are available:
```bash
source ~/.zshrc # or source ~/.bashrc / fish equivalent
```
> **Important:** After setup completes, source the API environment file so your keys take effect in the current terminal session:
> ```bash
> source ~/.pdd/api-env.zsh # or api-env.bash, depending on your shell
> ```
> New terminal windows will load keys automatically.

👉 If you prefer to configure things manually, see [SETUP_WITH_GEMINI.md](SETUP_WITH_GEMINI.md) for full instructions on obtaining a Gemini API key and creating your own `~/.pdd/llm_model.csv`.
If you skip this step, the first regular pdd command you run will detect the missing setup files and print a reminder banner so you can finish onboarding later.

5. **Run Hello**:
```bash
@@ -321,28 +328,6 @@ For a concrete, up-to-date reference of supported models and example rows, see t

For proper model identifiers to use in your custom configuration, refer to the [LiteLLM Model List](https://docs.litellm.ai/docs/providers) documentation. LiteLLM typically uses model identifiers in the format `provider/model_name` (e.g., "openai/gpt-4", "anthropic/claude-3-opus-20240229").
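Because identifiers follow the `provider/model_name` convention, splitting one apart is straightforward. A hypothetical helper for illustration only (not part of pdd or LiteLLM):

```python
def split_model_id(model_id: str) -> tuple:
    """Split a LiteLLM-style identifier into (provider, model_name).
    Identifiers without a provider prefix (e.g. "gpt-4") yield provider None."""
    provider, sep, name = model_id.partition("/")
    return (provider, name) if sep else (None, model_id)

print(split_model_id("anthropic/claude-3-opus-20240229"))
# → ('anthropic', 'claude-3-opus-20240229')
```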

## Post-Installation Setup

1. Run the guided setup (required unless you do this manually or use the cloud):
```bash
pdd setup
```
This wraps the interactive bootstrap utility to install shell tab completion, capture your API keys, create `~/.pdd` configuration files, and write the starter prompt. Re-run it any time to update keys or reinstall completion.
If you skip this step, the first regular `pdd` command you run will detect the missing setup files and print a reminder banner so you can finish onboarding later (the banner is suppressed once `~/.pdd/api-env` exists or when your project already provides credentials via `.env` or `.pdd/`).

2. Reload your shell so the new completion and environment hooks are available:
```bash
source ~/.zshrc # or source ~/.bashrc / fish equivalent
```

3. Configure environment variables (optional):
```bash
# Add to .bashrc, .zshrc, or equivalent
export PDD_AUTO_UPDATE=true
export PDD_GENERATE_OUTPUT_PATH=/path/to/generated/code/
export PDD_TEST_OUTPUT_PATH=/path/to/tests/
```

## Troubleshooting Common Installation Issues

1. **Command not found**
@@ -1853,7 +1838,7 @@ For the agentic fallback to function, you need to have at least one of the suppo
* Requires the `ANTHROPIC_API_KEY` environment variable to be set.
2. **Google Gemini:**
* Requires the `gemini` CLI to be installed and in your `PATH`.
* Requires the `GOOGLE_API_KEY` environment variable to be set.
* Requires the `GOOGLE_API_KEY` or `GEMINI_API_KEY` environment variable to be set.
3. **OpenAI Codex/GPT:**
* Requires the `codex` CLI to be installed and in your `PATH`.
* Requires the `OPENAI_API_KEY` environment variable to be set.
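The requirements above can be checked programmatically. An illustrative sketch under stated assumptions; the table and function below are made up for this example and are not pdd internals:

```python
import os
import shutil

# (CLI executable, accepted API key env vars), per the list above
HARNESSES = [
    ("claude", ("ANTHROPIC_API_KEY",)),
    ("gemini", ("GOOGLE_API_KEY", "GEMINI_API_KEY")),
    ("codex", ("OPENAI_API_KEY",)),
]

def available_harnesses(which=shutil.which, env=os.environ):
    """Return the CLIs that are both on PATH and have a required key set.
    `which` and `env` are injectable so the rule is easy to test."""
    return [
        cli
        for cli, keys in HARNESSES
        if which(cli) and any(env.get(k) for k in keys)
    ]
```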
@@ -2799,13 +2784,18 @@ The `.pddrc` approach is recommended for team projects as it ensures consistent

### Model Configuration (`llm_model.csv`)

PDD uses a CSV file (`llm_model.csv`) to store information about available AI models, their costs, capabilities, and required API key names. When running commands locally (e.g., using the `update_model_costs.py` utility or potentially local execution modes if implemented), PDD determines which configuration file to use based on the following priority:
PDD uses a CSV file (`llm_model.csv`) to store information about available AI models, their costs, capabilities, and required API key names.

When running commands locally, PDD determines which configuration file to use based on the following priority:

1. **User-specific:** `~/.pdd/llm_model.csv` - If this file exists, it takes precedence over any project-level configuration. This allows users to maintain a personal, system-wide model configuration.
2. **Project-specific:** `<PROJECT_ROOT>/.pdd/llm_model.csv` - If the user-specific file is not found, PDD looks for the file within the `.pdd` directory of the determined project root (based on `PDD_PATH` or auto-detection).
3. **Package default:** If neither of the above exist, PDD falls back to the default configuration bundled with the package installation.

This tiered approach allows for both shared project configurations and individual user overrides, while ensuring PDD works out-of-the-box without requiring manual configuration.
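The lookup order above can be sketched as a small resolver. This is a sketch of the described priority, not pdd's actual code; the paths are parameters so the rule is easy to follow:

```python
from pathlib import Path

def resolve_model_csv(home: Path, project_root: Path, package_default: Path) -> Path:
    """Return the llm_model.csv to use, in priority order:
    user-specific, then project-specific, then the bundled default."""
    user_csv = home / ".pdd" / "llm_model.csv"
    if user_csv.exists():
        return user_csv  # 1. personal, system-wide override
    project_csv = project_root / ".pdd" / "llm_model.csv"
    if project_csv.exists():
        return project_csv  # 2. shared project configuration
    return package_default  # 3. works out-of-the-box
```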

**Note:** You can manually edit this CSV, but running `pdd setup` again is the recommended way to add providers and update models.

*Note: This file-based configuration primarily affects local operations and utilities. Cloud execution modes likely rely on centrally managed configurations.*


31 changes: 23 additions & 8 deletions SETUP_WITH_GEMINI.md
@@ -60,14 +60,29 @@ Right after installation, let PDD bootstrap its configuration:
pdd setup
```

During the wizard:
- Choose **Install tab completion** if you want shell helpers.
- Pick **Google Gemini** when asked which providers to configure.
- Paste your Gemini API key when prompted (you can create it in the next step if you haven’t already).

The wizard writes your credentials to `~/.pdd/api-env`, seeds `~/.pdd/llm_model.csv` with Gemini entries, and reminds you to reload your shell (`source ~/.zshrc`, etc.) so completion and env hooks load.

If you prefer to configure everything manually—or you’re on an offline machine—skip the wizard and follow the manual instructions below.
The setup wizard runs these steps:
1. Detects agentic CLI tools (Claude, Gemini, Codex) and offers installation and API key configuration if needed
2. Scans for API keys across `.env`, `~/.pdd/api-env.*`, and the shell environment; prompts to add one if none are found
3. Configures models from a reference CSV (`data/llm_model.csv`) of top models (ELO ≥ 1400) across all LiteLLM-supported providers, based on your available keys
4. Optionally creates a `.pddrc` project config
5. Tests the first available model with a real LLM call
6. Prints a structured summary (CLIs, keys, models, test result)

When adding your Gemini API key:
- Select Gemini CLI as one of the agentic CLI tools
- The wizard will detect that `GEMINI_API_KEY` is missing
- Paste your API key when prompted (you can create it in the next step if you haven't already)
- The wizard tests it immediately and confirms it works

The wizard writes your credentials to `~/.pdd/api-env.zsh` (or `.bash`) and updates `llm_model.csv` with your selected models.

> **Important:** After setup completes, source the API environment file so your keys take effect in the current terminal session:
> ```bash
> source ~/.pdd/api-env.zsh # or api-env.bash, depending on your shell
> ```
> New terminal windows will load keys automatically.

If you prefer to configure everything manually—or you're on an offline machine—skip the wizard and follow the manual instructions below.

---

50 changes: 50 additions & 0 deletions context/api_key_scanner_example.py
@@ -0,0 +1,50 @@
from __future__ import annotations

import sys
from pathlib import Path

# Add the project root to sys.path
project_root = Path(__file__).resolve().parent.parent
sys.path.append(str(project_root))

from pdd.api_key_scanner import scan_environment, get_provider_key_names, KeyInfo


def main() -> None:
    """
    Demonstrates how to use the api_key_scanner module to:
    1. Discover all API key variable names from the user's ~/.pdd/llm_model.csv
    2. Scan multiple sources (shell env, .env file, ~/.pdd/api-env.*)
    3. Report existence and source without storing key values

    Note: The scanner reads from the user's configured models, not a hardcoded
    master list. If no models have been added via `pdd setup`, both functions
    return empty results.
    """
    # Get all provider key names from the user's configured CSV
    all_keys = get_provider_key_names()
    print(f"Provider key names from user CSV: {all_keys}\n")

    if not all_keys:
        print("No models configured yet. Use `pdd setup` to add providers.")
        return

    # Scan the environment for all API keys
    print("Scanning environment for API keys...\n")
    scan_results = scan_environment()

    # Display results — note: KeyInfo only has source and is_set, no value
    for key_name, key_info in scan_results.items():
        if key_info.is_set:
            print(f"  {key_name:25s} ✓ Found ({key_info.source})")
        else:
            print(f"  {key_name:25s} — Not found")

    found = sum(1 for k in scan_results.values() if k.is_set)
    missing = sum(1 for k in scan_results.values() if not k.is_set)
    print(f"\nFound: {found}  Missing: {missing}")


if __name__ == "__main__":
main()
48 changes: 48 additions & 0 deletions context/cli_detector_example.py
@@ -0,0 +1,48 @@
from __future__ import annotations

import sys
from pathlib import Path

# Add the project root to sys.path
project_root = Path(__file__).resolve().parent.parent
sys.path.append(str(project_root))

from pdd.cli_detector import detect_and_bootstrap_cli, detect_cli_tools, CliBootstrapResult


def main() -> None:
    """
    Demonstrates how to use the cli_detector module to:
    1. Bootstrap agentic CLIs for pdd setup (detect_and_bootstrap_cli)
    2. Detect installed CLI harnesses (claude, codex, gemini)
    3. Cross-reference with available API keys
    4. Offer installation for missing CLIs
    """
    # Primary entry point used by pdd setup Phase 1:
    # results = detect_and_bootstrap_cli()  # Returns List[CliBootstrapResult]
    # for r in results:
    #     r.cli_name            -> "claude" | "codex" | "gemini" | ""
    #     r.provider            -> "anthropic" | "openai" | "google" | ""
    #     r.cli_path            -> "/usr/local/bin/claude" | ""
    #     r.api_key_configured  -> True | False
    #     r.skipped             -> True | False

    # Legacy function for detection only:
    # detect_cli_tools()  # Uncomment to run interactively

    # Example flow (detect_and_bootstrap_cli with multi-select):
    #   Checking CLI tools...
    #
    #   1. Claude CLI   ✓ Found at /usr/local/bin/claude   ✓ ANTHROPIC_API_KEY is set
    #   2. Codex CLI    ✗ Not found                        ✗ OPENAI_API_KEY not set
    #   3. Gemini CLI   ✗ Not found                        ✓ GEMINI_API_KEY is set
    #
    #   Select CLIs to use for pdd agentic tools (enter numbers separated by commas, e.g., 1,3):
    #
    # Returns [CliBootstrapResult(cli_name="claude", ...), CliBootstrapResult(cli_name="gemini", ...)]
    pass


if __name__ == "__main__":
main()
44 changes: 44 additions & 0 deletions context/model_tester_example.py
@@ -0,0 +1,44 @@
from __future__ import annotations

import sys
from pathlib import Path

# Add the project root to sys.path
project_root = Path(__file__).resolve().parent.parent
sys.path.append(str(project_root))

from pdd.model_tester import test_model_interactive


def main() -> None:
    """
    Demonstrates how to use the model_tester module to:
    1. List configured models from ~/.pdd/llm_model.csv
    2. Test a selected model via litellm.completion()
    3. Display diagnostics (API key status, timing, cost)
    """
    # Run the interactive tester
    # test_model_interactive()  # Uncomment to run interactively

    # Example flow:
    #   Configured models:
    #     1. anthropic/claude-haiku-4-5-20251001   ANTHROPIC_API_KEY
    #     2. gpt-5-nano                            OPENAI_API_KEY
    #     3. lm_studio/openai-gpt-oss-120b-mlx-6   (local)
    #
    #   Test which model? 1
    #   Testing anthropic/claude-haiku-4-5-20251001...
    #     API key   ANTHROPIC_API_KEY ✓ Found (shell environment)
    #     LLM call  ✓ OK (0.3s, $0.0001)
    #
    #   Test which model? 3
    #   Testing lm_studio/openai-gpt-oss-120b-mlx-6...
    #     API key   (local — no key required)
    #     Base URL  http://localhost:1234/v1
    #     LLM call  ✗ Connection refused (localhost:1234)
    pass


if __name__ == "__main__":
main()
42 changes: 42 additions & 0 deletions context/pddrc_initializer_example.py
@@ -0,0 +1,42 @@
from __future__ import annotations

import sys
from pathlib import Path

# Add the project root to sys.path
project_root = Path(__file__).resolve().parent.parent
sys.path.append(str(project_root))

from pdd.pddrc_initializer import _build_pddrc_content, _detect_language


def main() -> None:
    """
    Demonstrates how to use the pddrc_initializer module.

    The primary entry points are:
    - _detect_language(cwd): returns "python", "typescript", "go", or None
    - _build_pddrc_content(language): returns YAML string for .pddrc
    - offer_pddrc_init(): interactive flow with YAML preview + confirmation

    In practice, `pdd setup` imports _detect_language and _build_pddrc_content
    directly for a streamlined flow (no YAML preview).
    """
    # Detect language from marker files in cwd (Path is imported at module level)
    language = _detect_language(Path.cwd())
    print(f"Detected language: {language}")  # e.g. "python" or None

    # Build .pddrc content for a given language
    content = _build_pddrc_content(language or "python")
    print(content)

    # Or use the full interactive flow (shows YAML preview, asks for confirmation):
    # from pdd.pddrc_initializer import offer_pddrc_init
    # was_created = offer_pddrc_init()


if __name__ == "__main__":
main()