71 changes: 65 additions & 6 deletions README.md
@@ -39,7 +39,7 @@ Whether it’s a compilation error, test failure, or deployment hiccup, this plu

* **One-click error analysis** on any console output
* **Pipeline-ready** with a simple `explainError()` step
-* **AI-powered explanations** via OpenAI GPT models, Google Gemini or local Ollama models
+* **AI-powered explanations** via OpenAI GPT models, Google Gemini, local Ollama models, and more
* **Folder-level configuration** so teams can use project-specific settings
* **Smart provider management** — LangChain4j handles most providers automatically
* **Customizable**: set provider, model, API endpoint (enterprise-ready)[^1], log filters, and more
@@ -74,9 +74,9 @@ Whether it’s a compilation error, test failure, or deployment hiccup, this plu
| Setting | Description | Default |
|---------|-------------|---------|
| **Enable AI Error Explanation** | Toggle plugin functionality | ✅ Enabled |
-| **AI Provider** | Choose between OpenAI, Google Gemini, or Ollama | `OpenAI` |
-| **API Key** | Your AI provider API key | Get from [OpenAI](https://platform.openai.com/settings) or [Google AI Studio](https://aistudio.google.com/app/apikey) |
-| **API URL** | AI service endpoint | **Leave empty** for official APIs (OpenAI, Gemini). **Specify custom URL** for OpenAI-compatible services and air-gapped environments. |
+| **AI Provider** | Choose between OpenAI, Google Gemini, Anthropic Claude, Azure OpenAI, DeepSeek, or Ollama | `OpenAI` |
+| **API Key** | Your AI provider API key | Get from your provider's platform |
+| **API URL** | AI service endpoint | **Leave empty** for official APIs (OpenAI, Gemini, Anthropic). **Required** for Azure OpenAI and Ollama. **Optional** for custom/self-hosted services. |
| **AI Model** | Model to use for analysis | *Required*. Specify the model name offered by your selected AI provider |
| **Custom Context** | Additional instructions or context for the AI (e.g., KB article links, organization-specific troubleshooting steps) | *Optional*. Can be overridden at the job level. |

@@ -148,22 +148,81 @@ This allows you to manage the plugin configuration alongside your other Jenkins
## Supported AI Providers

### OpenAI
-- **Models**: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, etc.
+- **Models**: `gpt-5`, `gpt-5-mini`, `gpt-4.1`, `gpt-4-turbo`, `gpt-3.5-turbo`, etc.
- **API Key**: Get from [OpenAI Platform](https://platform.openai.com/settings)
- **Endpoint**: Leave empty for official OpenAI API, or specify custom URL for OpenAI-compatible services
- **Best for**: Comprehensive error analysis with excellent reasoning
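
  Like the other providers below, OpenAI can be configured via JCasC. A minimal sketch, assuming the provider symbol is `openai` and the same field layout as the other provider examples (leave `url` unset to use the official OpenAI endpoint):

```yaml
unclassified:
  explainError:
    aiProvider:
      openai:
        apiKey: "${AI_API_KEY}"   # assumed field name, mirroring the other providers
        model: "gpt-4.1"
    enableExplanation: true
```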

### Google Gemini
- **Models**: `gemini-2.0-flash`, `gemini-2.0-flash-lite`, `gemini-2.5-flash`, etc.
- **API Key**: Get from [Google AI Studio](https://aistudio.google.com/app/apikey)
-- **Endpoint**: Leave empty for official Google AI API, or specify custom URL for Gemini-compatible services
+- **Endpoint**: Leave empty for official Google AI API
- **Best for**: Fast, efficient analysis with competitive quality
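
  Gemini can be configured via JCasC in the same way. A minimal sketch, assuming the provider symbol is `gemini` and the same field layout as the other provider examples:

```yaml
unclassified:
  explainError:
    aiProvider:
      gemini:
        apiKey: "${AI_API_KEY}"   # assumed field name, mirroring the other providers
        model: "gemini-2.0-flash"
    enableExplanation: true
```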

### Anthropic Claude
- **Models**: `claude-3-7-sonnet`, `claude-3-5-sonnet`, `claude-3-5-haiku`, `claude-3-opus`, etc.
- **API Key**: Get from [Anthropic Console](https://console.anthropic.com/)
- **Endpoint**: Leave empty for official Anthropic API
- **Best for**: Detailed, thorough error analysis with high accuracy
- **Configuration Example**:
```yaml
unclassified:
  explainError:
    aiProvider:
      anthropic:
        apiKey: "${AI_API_KEY}"
        model: "claude-3-5-sonnet-20241022"
    enableExplanation: true
```

### Azure OpenAI
- **Models**: Your Azure deployment names (e.g., `gpt-5`, `gpt-4.1`)
- **API Key**: Get from Azure Portal
- **Endpoint**: **Required** - Your Azure OpenAI endpoint (e.g., `https://your-resource.openai.azure.com`)
- **Best for**: Enterprise environments requiring Azure integration and compliance
- **Configuration Example**:
```yaml
unclassified:
  explainError:
    aiProvider:
      azureOpenai:
        apiKey: "${AZURE_API_KEY}"
        url: "https://your-resource.openai.azure.com"
        model: "your-deployment-name"
    enableExplanation: true
```

### DeepSeek
- **Models**: `deepseek-chat`, `deepseek-coder`, `deepseek-reasoner`
- **API Key**: Get from [DeepSeek Platform](https://platform.deepseek.com/)
- **Endpoint**: Leave empty for the official DeepSeek API (default: `https://api.deepseek.com`)
- **Best for**: Cost-effective analysis, competitive performance at lower costs
- **Configuration Example**:
```yaml
unclassified:
  explainError:
    aiProvider:
      deepseek:
        apiKey: "${AI_API_KEY}"
        model: "deepseek-chat"
    enableExplanation: true
```

### Ollama (Local/Private LLM)
- **Models**: `gemma3:1b`, `gpt-oss`, `deepseek-r1`, and any model available in your Ollama instance
- **API Key**: Not required by default (unless your Ollama server is secured)
- **Endpoint**: `http://localhost:11434` (or your Ollama server URL)
- **Best for**: Private, local, or open-source LLMs; no external API usage or cost
- **Configuration Example**:
```yaml
unclassified:
  explainError:
    aiProvider:
      ollama:
        model: "gemma3:1b"
        url: "http://localhost:11434"
    enableExplanation: true
```

## Usage

42 changes: 42 additions & 0 deletions pom.xml
@@ -70,6 +70,14 @@
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <!-- Force consistent Netty version to avoid conflicts with Azure OpenAI -->
            <dependency>
                <groupId>io.netty</groupId>
                <artifactId>netty-bom</artifactId>
                <version>4.1.128.Final</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

@@ -208,5 +216,39 @@
            </exclusions>
        </dependency>

        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-anthropic</artifactId>
            <version>${langchain4j.version}</version>
            <optional>true</optional>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>*</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>*</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-azure-open-ai</artifactId>
            <version>${langchain4j.version}</version>
            <optional>true</optional>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>*</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>*</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

    </dependencies>
</project>
src/main/java/io/jenkins/plugins/explain_error/provider/AnthropicProvider.java (new file)
@@ -0,0 +1,126 @@
package io.jenkins.plugins.explain_error.provider;

import dev.langchain4j.model.anthropic.AnthropicChatModel;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ResponseFormat;
import dev.langchain4j.service.AiServices;
import edu.umd.cs.findbugs.annotations.CheckForNull;
import edu.umd.cs.findbugs.annotations.NonNull;
import hudson.Extension;
import hudson.Util;
import hudson.model.AutoCompletionCandidates;
import hudson.model.TaskListener;
import hudson.util.FormValidation;
import hudson.util.Secret;
import io.jenkins.plugins.explain_error.ExplanationException;
import java.util.logging.Level;
import java.util.logging.Logger;
import jenkins.model.Jenkins;
import org.jenkinsci.Symbol;
import org.kohsuke.stapler.DataBoundConstructor;
import org.kohsuke.stapler.QueryParameter;
import org.kohsuke.stapler.verb.GET;
import org.kohsuke.stapler.verb.POST;

public class AnthropicProvider extends BaseAIProvider {

    private static final Logger LOGGER = Logger.getLogger(AnthropicProvider.class.getName());

    protected Secret apiKey;

    @DataBoundConstructor
    public AnthropicProvider(String url, String model, Secret apiKey) {
        super(url, model);
        this.apiKey = apiKey;
    }

    public Secret getApiKey() {
        return apiKey;
    }

    @Override
    public Assistant createAssistant() {
        ChatModel model = AnthropicChatModel.builder()
                .baseUrl(Util.fixEmptyAndTrim(getUrl())) // Will use default if null
                .apiKey(getApiKey().getPlainText())
                .modelName(getModel())
                .temperature(0.3)
                .logRequests(LOGGER.isLoggable(Level.FINE))
                .logResponses(LOGGER.isLoggable(Level.FINE))
                .build();

        return AiServices.create(Assistant.class, model);
    }

    @Override
    public boolean isNotValid(@CheckForNull TaskListener listener) {
        if (listener != null) {
            if (Util.fixEmptyAndTrim(Secret.toString(getApiKey())) == null) {
                listener.getLogger().println("No API key configured for Anthropic.");
            }
            if (Util.fixEmptyAndTrim(getModel()) == null) {
                listener.getLogger().println("No Model configured for Anthropic.");
            }
        }
        return Util.fixEmptyAndTrim(Secret.toString(getApiKey())) == null
                || Util.fixEmptyAndTrim(getModel()) == null;
    }

    @Extension
    @Symbol("anthropic")
    public static class DescriptorImpl extends BaseProviderDescriptor {

        private static final String[] MODELS = new String[] {
            "claude-3-7-sonnet-20250219",
            "claude-3-5-sonnet-20241022",
            "claude-3-5-haiku-20241022",
            "claude-3-opus-20240229",
            "claude-3-sonnet-20240229",
            "claude-3-haiku-20240307"
        };

        @NonNull
        @Override
        public String getDisplayName() {
            return "Anthropic (Claude)";
        }

        @Override
        public String getDefaultModel() {
            return "claude-3-5-sonnet-20241022";
        }

        @GET
        @SuppressWarnings("lgtm[jenkins/no-permission-check]")
        public AutoCompletionCandidates doAutoCompleteModel(@QueryParameter String value) {
            AutoCompletionCandidates c = new AutoCompletionCandidates();
            for (String model : MODELS) {
                if (model.toLowerCase().startsWith(value.toLowerCase())) {
                    c.add(model);
                }
            }
            return c;
        }

        /**
         * Method to test the AI API configuration.
         * This is called when the "Test Configuration" button is clicked.
         */
        @POST
        public FormValidation doTestConfiguration(@QueryParameter("apiKey") Secret apiKey,
                @QueryParameter("url") String url,
                @QueryParameter("model") String model) throws ExplanationException {
            Jenkins.get().checkPermission(Jenkins.ADMINISTER);

            AnthropicProvider provider = new AnthropicProvider(url, model, apiKey);
            try {
                provider.explainError("Send 'Configuration test successful' to me.", null);
                return FormValidation.ok("Configuration test successful! API connection is working properly.");
            } catch (ExplanationException e) {
                return FormValidation.error("Configuration test failed: " + e.getMessage(), e);
            }
        }
    }
}