Commit eda0d21

barckcode and claude committed

Feat: Add OpenAI LLM provider support

- Add OpenAI client implementation with gpt-4o default model
- Create LLM factory pattern for provider abstraction
- Add command line flags for --provider and --model selection
- Support environment variables for configuration (LLM_PROVIDER, OPENAI_MODEL, CLAUDE_MODEL)
- Auto-detection of provider based on available API keys
- Maintain backward compatibility with Claude as default
- Update README with comprehensive configuration examples
- Update Krew manifest to v0.1.3 with OpenAI support documentation

Closes #1

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

1 parent 478efba · commit eda0d21

7 files changed

Lines changed: 479 additions & 97 deletions


README.md

Lines changed: 77 additions & 1 deletion
@@ -12,6 +12,7 @@
 * **Actionable fixes** – concrete `kubectl` / `helm` commands you can copy-paste.
 * **Understands the whole picture** – pods, deployments, services, CRDs, ingresses…
 * **Human or machine output** – pretty terminal format, or JSON / YAML for automation.
+* **Multiple LLM providers** – supports Claude (Anthropic) and OpenAI models.
 
 ---
 
@@ -20,11 +21,18 @@
 ### 1. Prerequisites
 
 * Go 1.21+
-* An Anthropic **API key** exported as `ANTHROPIC_API_KEY`
+* An **API key** for your chosen LLM provider:
+  - **Claude (Anthropic)**: `ANTHROPIC_API_KEY`
+  - **OpenAI**: `OPENAI_API_KEY`
 * Access to the cluster you want to debug (via `kubectl` context)
 
 ```bash
+# For Claude (default)
 export ANTHROPIC_API_KEY="sk-..."
+
+# For OpenAI
+export OPENAI_API_KEY="sk-..."
+export LLM_PROVIDER="openai"  # Optional: auto-detects from API key
 ```
 
 ---
@@ -89,6 +97,74 @@ kubectl ai debug "high memory usage" -n production --all
 
 # Output as JSON
 kubectl ai debug "slow startup" -r deployment/api -o json
+
+# Use specific LLM provider
+kubectl ai debug "networking issues" -r deployment/app --provider openai
+
+# Use specific model
+kubectl ai debug "memory leaks" -r deployment/app --provider openai --model gpt-4o
+
+# Use environment variables to set provider and model
+export LLM_PROVIDER="openai"
+export OPENAI_MODEL="gpt-4o-mini"
+kubectl ai debug "performance issues" -r deployment/app
+
+# Override environment with command line flags
+kubectl ai debug "storage issues" -r deployment/app --provider claude --model claude-3-opus-20240229
+```
+
+---
+
+## 🔧 LLM Provider Configuration
+
+### Claude (Anthropic) - Default
+
+```bash
+export ANTHROPIC_API_KEY="sk-..."
+# Optional: specify model (default: claude-3-5-sonnet-20241022)
+export CLAUDE_MODEL="claude-3-5-sonnet-20241022"
+```
+
+### OpenAI
+
+```bash
+export OPENAI_API_KEY="sk-..."
+# Optional: specify model (default: gpt-4o)
+export OPENAI_MODEL="gpt-4o"
+# Optional: specify provider explicitly (auto-detects from API key if not set)
+export LLM_PROVIDER="openai"
+```
+
+### Configuration Priority
+
+1. **Command line flags** (`--provider`, `--model`) - highest priority
+2. **Environment variables** (`LLM_PROVIDER`, `OPENAI_MODEL`, `CLAUDE_MODEL`)
+3. **Auto-detection** - based on available API keys (Claude preferred if both available)
+
+### Command Line Options
+
+- `--provider`: Explicitly choose LLM provider (`claude`, `openai`)
+- `--model`: Override the default model for the selected provider
+- Auto-detection: If no provider is specified, the tool auto-detects based on available API keys
+
+---
+
+## 📋 Complete Command Reference
+
+```bash
+kubectl ai debug PROBLEM [flags]
+
+Flags:
+  -h, --help                help for debug
+      --kubeconfig string   path to kubeconfig file (default "~/.kube/config")
+      --context string      kubeconfig context (overrides current-context)
+  -n, --namespace string    kubernetes namespace (default "default")
+  -r, --resource strings    resources to analyze (e.g., deployment/nginx, pod/nginx-xxx)
+      --all                 analyze all resources in the namespace
+  -o, --output string       output format (human, json, yaml) (default "human")
+  -v, --verbose             verbose output
+      --provider string     LLM provider (claude, openai). Defaults to auto-detect from env
+      --model string        LLM model to use (overrides default)
 ```
 
 ---
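The configuration priority the README describes (flags first, then environment variables, then auto-detection with Claude preferred when both keys exist) can be sketched as a small Go helper. `resolveProvider` and its injected `getenv` callback are hypothetical names for illustration, not the plugin's actual API:

```go
package main

import "fmt"

// resolveProvider sketches the documented priority rules (hypothetical
// helper): an explicit --provider flag wins, then the LLM_PROVIDER
// variable, then auto-detection from whichever API key is set, with
// Claude preferred when both keys are present.
func resolveProvider(flagProvider string, getenv func(string) string) (string, error) {
	if flagProvider != "" {
		return flagProvider, nil // 1. command line flag: highest priority
	}
	if p := getenv("LLM_PROVIDER"); p != "" {
		return p, nil // 2. environment variable
	}
	if getenv("ANTHROPIC_API_KEY") != "" {
		return "claude", nil // 3a. auto-detect: Claude preferred
	}
	if getenv("OPENAI_API_KEY") != "" {
		return "openai", nil // 3b. auto-detect: OpenAI
	}
	return "", fmt.Errorf("no LLM provider configured: set ANTHROPIC_API_KEY or OPENAI_API_KEY")
}

func main() {
	// With only OPENAI_API_KEY set and no flag, auto-detection picks openai.
	env := map[string]string{"OPENAI_API_KEY": "sk-test"}
	p, err := resolveProvider("", func(k string) string { return env[k] })
	fmt.Println(p, err)
}
```

Injecting `getenv` keeps the rule testable; real code would pass `os.Getenv`.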

cmd/debug.go

Lines changed: 54 additions & 19 deletions
@@ -6,24 +6,28 @@ import (
 	"strings"
 	"time"
 
+	"path/filepath"
+
 	"github.com/briandowns/spinner"
 	"github.com/fatih/color"
-	"github.com/spf13/cobra"
 	"github.com/helmcode/kubectl-ai/pkg/analyzer"
 	"github.com/helmcode/kubectl-ai/pkg/formatter"
 	"github.com/helmcode/kubectl-ai/pkg/k8s"
+	"github.com/helmcode/kubectl-ai/pkg/llm"
+	"github.com/spf13/cobra"
 	"k8s.io/client-go/util/homedir"
-	"path/filepath"
 )
 
 var (
-	kubeconfig   string
-	namespace    string
-	kubeContext  string
-	resources    []string
+	kubeconfig   string
+	namespace    string
+	kubeContext  string
+	resources    []string
 	allResources bool
 	outputFormat string
-	verbose      bool
+	verbose      bool
+	llmProvider  string
+	llmModel     string
 )
 
 func NewDebugCmd() *cobra.Command {
@@ -54,24 +58,20 @@ Examples:
 	}
 
 	cmd.Flags().StringVarP(&namespace, "namespace", "n", "default", "Kubernetes namespace")
-	cmd.Flags().StringVar(&kubeContext, "context", "", "Kubeconfig context (overrides current-context)")
+	cmd.Flags().StringVar(&kubeContext, "context", "", "Kubeconfig context (overrides current-context)")
 	cmd.Flags().StringSliceVarP(&resources, "resource", "r", []string{}, "Resources to analyze (e.g., deployment/nginx, pod/nginx-xxx)")
 	cmd.Flags().BoolVar(&allResources, "all", false, "Analyze all resources in the namespace")
 	cmd.Flags().StringVarP(&outputFormat, "output", "o", "human", "Output format (human, json, yaml)")
 	cmd.Flags().BoolVarP(&verbose, "verbose", "v", false, "Verbose output")
+	cmd.Flags().StringVar(&llmProvider, "provider", "", "LLM provider (claude, openai). Defaults to auto-detect from env")
+	cmd.Flags().StringVar(&llmModel, "model", "", "LLM model to use (overrides default)")
 
 	return cmd
 }
 
 func runDebug(cmd *cobra.Command, args []string) error {
 	problem := args[0]
 
-	// Validate that we have API key
-	apiKey := os.Getenv("ANTHROPIC_API_KEY")
-	if apiKey == "" {
-		return fmt.Errorf("ANTHROPIC_API_KEY environment variable not set")
-	}
-
 	// Validate inputs
 	if !allResources && len(resources) == 0 {
 		return fmt.Errorf("either specify resources with -r or use --all flag")
@@ -87,9 +87,9 @@ func runDebug(cmd *cobra.Command, args []string) error {
 
 	// Expand home symbol in kubeconfig if needed
 	if strings.HasPrefix(kubeconfig, "~/") {
-		if homeDir, err := os.UserHomeDir(); err == nil {
-			kubeconfig = filepath.Join(homeDir, kubeconfig[2:])
-		}
+		if homeDir, err := os.UserHomeDir(); err == nil {
+			kubeconfig = filepath.Join(homeDir, kubeconfig[2:])
+		}
 	}
 
 	// Initialize K8s client
@@ -113,10 +113,27 @@ func runDebug(cmd *cobra.Command, args []string) error {
 	s.Stop()
 	printSuccess(fmt.Sprintf("Gathered %d resources", len(resourcesData)))
 
+	s.Suffix = " Initializing AI client..."
+	s.Start()
+
+	// Initialize LLM client using factory
+	llmClient, err := llm.CreateFromEnv(llmProvider, llmModel)
+	if err != nil {
+		s.Stop()
+		return fmt.Errorf("failed to initialize LLM client: %w", err)
+	}
+
+	s.Stop()
+	printSuccess("AI client initialized")
+
+	// Show LLM provider and model info
+	printLLMInfo(llmClient)
+	fmt.Println()
+
 	s.Suffix = " Analyzing with AI..."
 	s.Start()
 
-	aiAnalyzer := analyzer.New(apiKey)
+	aiAnalyzer := analyzer.NewWithLLM(llmClient)
 	analysis, err := aiAnalyzer.Analyze(problem, resourcesData)
 	if err != nil {
 		s.Stop()
@@ -146,6 +163,24 @@ func printHeader(problem string) {
 	fmt.Println()
 }
 
+func printLLMInfo(llmClient llm.LLM) {
+	// Get provider and model info from the LLM client
+	provider := "unknown"
+	model := "unknown"
+
+	// Type assertion to get provider and model information
+	switch client := llmClient.(type) {
+	case *llm.Claude:
+		provider = "claude"
+		model = client.GetModel()
+	case *llm.OpenAI:
+		provider = "openai"
+		model = client.GetModel()
+	}
+
+	fmt.Printf("🤖 LLM Provider: %s (%s)\n", provider, model)
+}
+
 func printSuccess(msg string) {
 	green := color.New(color.FgGreen)
 	green.Printf("✓ %s\n", msg)
@@ -154,4 +189,4 @@ func printSuccess(msg string) {
 func printError(msg string) {
 	red := color.New(color.FgRed)
 	red.Printf("✗ %s\n", msg)
-}
+}

krew-manifest.yaml

Lines changed: 10 additions & 5 deletions
@@ -3,18 +3,19 @@ kind: Plugin
 metadata:
   name: ai
 spec:
-  version: v0.1.2
+  version: v0.1.3
   homepage: https://github.com/helmcode/kubectl-ai
   shortDescription: AI-powered Kubernetes debugging
   description: |
-    This plugin uses AI (Claude) to analyze Kubernetes resources and help debug
+    This plugin uses AI (Claude/OpenAI) to analyze Kubernetes resources and help debug
     configuration issues, performance problems, and provide actionable recommendations.
 
     Features:
    - Analyze any Kubernetes resource (native or CRD)
    - Get root cause analysis for issues
    - Receive actionable fix commands
    - Support for multiple output formats (human, json, yaml)
+   - Multiple LLM providers (Claude, OpenAI)
 
     Examples:
    # Debug a crashing deployment
@@ -26,12 +27,16 @@ spec:
    # Debug all resources in a namespace
    kubectl ai debug "high memory usage" -n production --all
   caveats: |
-    This plugin requires an Anthropic API key to function.
+    This plugin requires an API key from either Anthropic or OpenAI to function.
 
-    Before using, set your API key:
+    For Claude (default):
     export ANTHROPIC_API_KEY="your-api-key"
+    Get your API key at: https://console.anthropic.com/
 
-    Get your API key at: https://console.anthropic.com/
+    For OpenAI:
+    export OPENAI_API_KEY="your-api-key"
+    export LLM_PROVIDER="openai"  # Optional: auto-detects from API key
+    Get your API key at: https://platform.openai.com/
   platforms:
   - selector:
       matchLabels:

pkg/analyzer/analyzer.go

Lines changed: 26 additions & 7 deletions
@@ -1,20 +1,39 @@
 package analyzer
 
 import (
-	"fmt"
+	"fmt"
 
-	"github.com/helmcode/kubectl-ai/pkg/llm"
-	"github.com/helmcode/kubectl-ai/pkg/parser"
-	"github.com/helmcode/kubectl-ai/pkg/prompts"
-	"github.com/helmcode/kubectl-ai/pkg/model"
+	"github.com/helmcode/kubectl-ai/pkg/llm"
+	"github.com/helmcode/kubectl-ai/pkg/model"
+	"github.com/helmcode/kubectl-ai/pkg/parser"
+	"github.com/helmcode/kubectl-ai/pkg/prompts"
 )
 
 type Analyzer struct {
-	llm llm.LLM
+	llm llm.LLM
 }
 
 func New(apiKey string) *Analyzer {
-	return &Analyzer{llm: llm.NewClaude(apiKey)}
+	// For backward compatibility, default to Claude
+	return &Analyzer{llm: llm.NewClaude(apiKey)}
+}
+
+func NewWithProvider(provider llm.Provider, config map[string]string) (*Analyzer, error) {
+	factory := llm.NewFactory()
+	llmInstance, err := factory.CreateLLM(provider, config)
+	if err != nil {
+		return nil, err
+	}
+	return &Analyzer{llm: llmInstance}, nil
+}
+
+func NewFromEnv() (*Analyzer, error) {
+	factory := llm.NewFactory()
+	llmInstance, err := factory.CreateFromEnv()
+	if err != nil {
+		return nil, err
+	}
+	return &Analyzer{llm: llmInstance}, nil
 }
 
 func NewWithLLM(l llm.LLM) *Analyzer {
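The factory the analyzer calls (`llm.NewFactory()`, `CreateLLM`, `CreateFromEnv`) is not shown in this diff. A minimal sketch of how such a factory could dispatch on the provider, using the default models the README documents (all type and function shapes here are assumptions, not the actual `pkg/llm` code):

```go
package main

import "fmt"

// Provider mirrors the llm.Provider value the analyzer passes in.
type Provider string

const (
	ProviderClaude Provider = "claude"
	ProviderOpenAI Provider = "openai"
)

// LLM is an assumed common interface for the clients.
type LLM interface {
	GetModel() string
}

type claude struct{ model string }

func (c *claude) GetModel() string { return c.model }

type openAI struct{ model string }

func (o *openAI) GetModel() string { return o.model }

// Factory builds a concrete client behind the LLM interface.
type Factory struct{}

func NewFactory() *Factory { return &Factory{} }

// CreateLLM dispatches on the provider and applies the documented
// default model when none is supplied in config.
func (f *Factory) CreateLLM(p Provider, config map[string]string) (LLM, error) {
	model := config["model"] // safe even on a nil map
	switch p {
	case ProviderClaude:
		if model == "" {
			model = "claude-3-5-sonnet-20241022" // default per the README
		}
		return &claude{model: model}, nil
	case ProviderOpenAI:
		if model == "" {
			model = "gpt-4o" // default per the README
		}
		return &openAI{model: model}, nil
	default:
		return nil, fmt.Errorf("unknown provider %q", p)
	}
}

func main() {
	f := NewFactory()
	client, _ := f.CreateLLM(ProviderOpenAI, nil)
	fmt.Println(client.GetModel()) // gpt-4o
}
```

Keeping construction behind the factory is what lets `NewWithProvider` and `NewFromEnv` stay provider-agnostic: the analyzer only ever holds the interface.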
