feat(ai-proxy): add Anthropic LLM provider support#1435

Open
Scra3 wants to merge 24 commits into main from feat/prd-87-anthropic-support

Conversation

Scra3 (Member) commented on Jan 23, 2026

Summary

  • Add Anthropic provider support to @forestadmin/ai-proxy using @langchain/anthropic
  • Support all Claude models with tool calling capabilities
  • Maintain OpenAI-compatible response format for seamless integration
  • Unified configuration interface: both providers use same base fields (name, provider, model, apiKey)
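
For illustration, the unified configuration could look roughly like this (a hedged sketch: only the base fields name, provider, model, and apiKey come from this summary; the real AnthropicConfiguration in the PR carries additional Anthropic-specific options not shown here):

```ts
// Hypothetical shape of the unified configuration described above.
type BaseAIConfiguration = {
  name: string;
  apiKey: string;
};

type OpenAIConfiguration = BaseAIConfiguration & {
  provider: 'openai';
  model: string; // e.g. 'gpt-4o'
};

type AnthropicConfiguration = BaseAIConfiguration & {
  provider: 'anthropic';
  model: string; // e.g. 'claude-sonnet-4-5'
};

type AIConfiguration = OpenAIConfiguration | AnthropicConfiguration;
```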

Changes

  • Added @langchain/anthropic dependency
  • Added AnthropicConfiguration type with all Anthropic-specific options
  • Added ANTHROPIC_MODELS constant with supported Claude models
  • Implemented message conversion (OpenAI ↔ LangChain); see the sketch after this list
  • Implemented response conversion (LangChain → OpenAI format)
  • Added AnthropicUnprocessableError for error handling
  • Added comprehensive tests (38 new tests)
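
A minimal sketch of the OpenAI → LangChain message conversion mentioned above, assuming the message shape quoted in the review comments further down; the PR's actual converter in provider-dispatcher.ts may differ:

```ts
import { AIMessage, HumanMessage, SystemMessage, ToolMessage } from '@langchain/core/messages';

type OpenAIChatMessage = {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | null;
  tool_call_id?: string;
  tool_calls?: Array<{ id: string; function: { name: string; arguments: string } }>;
};

function convertMessageToLangChain(msg: OpenAIChatMessage) {
  switch (msg.role) {
    case 'system':
      return new SystemMessage(msg.content ?? '');
    case 'user':
      return new HumanMessage(msg.content ?? '');
    case 'assistant':
      if (msg.tool_calls?.length) {
        return new AIMessage({
          content: msg.content ?? '',
          tool_calls: msg.tool_calls.map(tc => ({
            id: tc.id,
            name: tc.function.name,
            args: JSON.parse(tc.function.arguments),
          })),
        });
      }

      return new AIMessage(msg.content ?? '');
    case 'tool':
      return new ToolMessage({ content: msg.content ?? '', tool_call_id: msg.tool_call_id ?? '' });
    default:
      throw new Error(`Unsupported message role: ${(msg as { role: string }).role}`);
  }
}
```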

Test plan

  • All 86 tests pass
  • Build succeeds
  • Lint passes (only warnings for non-null assertions)

🤖 Generated with Claude Code

linear bot commented Jan 23, 2026


qltysh bot commented Jan 23, 2026

2 new issues

Tool | Category | Rule | Count
--- | --- | --- | ---
qlty | Duplication | Found 15 lines of similar code in 2 locations (mass = 68) | 1
qlty | Structure | Function with many returns (count = 6): convertToolChoiceToLangChain | 1


qltysh bot commented Jan 23, 2026

Qlty

Coverage Impact

This PR will not change total coverage.

Modified Files with Diff Coverage (3)

Rating | File | % Diff | Uncovered Line #s
--- | --- | --- | ---
A | packages/ai-proxy/src/provider-dispatcher.ts | 96.1% | 226, 259
B | packages/ai-proxy/src/errors.ts | 100.0% |
A (new file) | packages/ai-proxy/src/provider.ts | 100.0% |
Total | | 96.4% |
🤖 Increase coverage with AI coding...

In the `feat/prd-87-anthropic-support` branch, add test coverage for this new code:

- `packages/ai-proxy/src/provider-dispatcher.ts` -- Lines 226 and 259

🚦 See full report on Qlty Cloud »

🛟 Help
  • Diff Coverage: Coverage for added or modified lines of code (excludes deleted files). Learn more.

  • Total Coverage: Coverage for the whole repository, calculated as the sum of all File Coverage. Learn more.

  • File Coverage: Covered Lines divided by Covered Lines plus Missed Lines. (Excludes non-executable lines including blank lines and comments.)

    • Indirect Changes: Changes to File Coverage for files that were not modified in this PR. Learn more.

tool_calls: msg.tool_calls.map(tc => ({
  id: tc.id,
  name: tc.function.name,
  args: JSON.parse(tc.function.arguments),
Scra3 (Member, Author) commented:


JSON.parse() can throw if tc.function.arguments contains malformed JSON. This call is outside the try-catch block (line 210), so errors would propagate as unhandled SyntaxError instead of being wrapped in AnthropicUnprocessableError.

OpenAI API can return malformed JSON in edge cases (documented issues).

Fix packages/ai-proxy/src/provider-dispatcher.ts:259: Wrap JSON.parse in try-catch or move convertMessagesToLangChain call inside the existing try-catch block at line 210
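
One way to apply the suggested fix, shown as a hedged sketch (parseToolArguments is a hypothetical helper; the stand-in class below mirrors the AnthropicUnprocessableError named in this PR, but its real constructor is not shown here):

```ts
// Stand-in for the error class declared in packages/ai-proxy/src/errors.ts.
class AnthropicUnprocessableError extends Error {}

// Hypothetical helper: wrap JSON.parse so malformed tool arguments surface as
// an AnthropicUnprocessableError instead of an unhandled SyntaxError.
function parseToolArguments(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw) as Record<string, unknown>;
  } catch (error) {
    throw new AnthropicUnprocessableError(
      `Malformed tool call arguments: ${(error as Error).message}`,
    );
  }
}
```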

});
}

return new AIMessage(msg.content);
Scra3 (Member, Author) commented:


LangChain message constructors (SystemMessage, HumanMessage, AIMessage) fail with null content. OpenAI API allows content: null for assistant messages (API spec).

The OpenAIMessage interface at line 117 defines content: string, but should allow null. Line 255 handles this correctly with msg.content || '', but lines 249, 251, 264, 267, 271 pass content directly.

Suggested change
return new AIMessage(msg.content);
return new AIMessage(msg.content || '');

Scra3 force-pushed the feat/prd-87-anthropic-support branch from 482f6b8 to 0fe24b9 on February 4, 2026 at 15:13
Scra3 changed the base branch from main to feat/ai-proxy-zod-validation on February 4, 2026 at 15:21
Scra3 force-pushed the feat/prd-87-anthropic-support branch 2 times, most recently from e121d6a to 4fa181e, on February 4, 2026 at 15:36
Scra3 force-pushed the feat/ai-proxy-zod-validation branch 2 times, most recently from f938960 to 503736c, on February 5, 2026 at 11:15
Base automatically changed from feat/ai-proxy-zod-validation to main on February 5, 2026 at 14:03

Scra3 commented Feb 5, 2026

Scra3 force-pushed the feat/prd-87-anthropic-support branch 3 times, most recently from abebc85 to 57938d6, on February 5, 2026 at 15:29
alban bertolini and others added 14 commits February 6, 2026 18:15
BREAKING CHANGE: isModelSupportingTools is no longer exported from ai-proxy

- Add AIModelNotSupportedError for descriptive error messages
- Move model validation from agent.addAi() to Router constructor
- Make isModelSupportingTools internal (not exported from index)
- Error is thrown immediately at Router init if model doesn't support tools

This is a bug fix: validation should happen at proxy initialization,
not at the agent level. This ensures consistent behavior regardless
of how the Router is instantiated.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
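
A hedged illustration of the fail-fast validation described in the commit above; AIModelNotSupportedError and isModelSupportingTools are named in the commit, but their signatures and the Router internals shown here are assumptions:

```ts
// Illustrative only: the real Router takes a richer configuration object.
class AIModelNotSupportedError extends Error {
  constructor(model: string) {
    super(`Model "${model}" does not support tool calling`);
  }
}

declare function isModelSupportingTools(model: string): boolean;

class Router {
  constructor(configuration: { model: string }) {
    // Fail fast: an unsupported model is rejected at Router construction,
    // regardless of how the Router is instantiated.
    if (!isModelSupportingTools(configuration.model)) {
      throw new AIModelNotSupportedError(configuration.model);
    }
  }
}
```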
Add comprehensive integration tests that run against real OpenAI API:

- ai-query route: simple chat, tool calls, tool_choice, parallel_tool_calls
- remote-tools route: listing tools (empty, brave search, MCP tools)
- invoke-remote-tool route: error handling
- MCP server integration: calculator tools with add/multiply
- Error handling: validation errors

Also adds:
- .env-test support for credentials (via dotenv)
- .env-test.example template for developers
- Jest setup to load environment variables

Run with: yarn workspace @forestadmin/ai-proxy test openai.integration

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…adiness

- Add multi-turn conversation test with tool results
- Add AINotConfiguredError test for missing AI config
- Add MCP error handling tests (unreachable server, auth failure)
- Skip flaky tests due to Langchain retry behavior
- Ensure tests work on main branch without Zod validation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Set maxRetries: 0 by default when creating ChatOpenAI instance.
This makes our library a simple passthrough without automatic retries,
giving users full control over retry behavior.

Also enables previously skipped integration tests that were flaky
due to retry delays.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
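
The passthrough behaviour described above comes down to a single constructor option; a sketch, assuming option names from recent @langchain/openai releases and an illustrative model name:

```ts
import { ChatOpenAI } from '@langchain/openai';

// maxRetries: 0 disables LangChain's built-in retry loop, so callers keep
// full control over retry behaviour.
const chat = new ChatOpenAI({
  model: 'gpt-4o', // illustrative
  apiKey: process.env.OPENAI_API_KEY,
  maxRetries: 0,
});
```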
Addresses all issues identified in PR review:

- Add invoke-remote-tool success tests for MCP tools (add, multiply)
- Strengthen weak error assertions with proper regex patterns
- Fix 'select AI configuration by name' test to verify no fallback warning
- Add test for fallback behavior when config not found
- Add logger verification in MCP error handling tests

Tests now verify:
- Error messages match expected patterns (not just toThrow())
- Logger is called with correct level and message on errors
- Config selection works without silent fallback

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The test was incorrectly checking for finish_reason: 'tool_calls'.
When forcing a specific function via tool_choice, OpenAI returns
finish_reason: 'stop' but still includes the tool_calls array.

The correct assertion is to verify the tool_calls array contains
the expected function name, not the finish_reason.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move the agent-specific message "Please call addAi() on your agent"
from ai-proxy to the agent package where it belongs.

- ai-proxy: AINotConfiguredError now uses generic "AI is not configured"
- agent: Catches AINotConfiguredError and adds agent-specific guidance

This keeps ai-proxy decoupled from agent-specific terminology.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
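
A hedged sketch of that decoupling (the proxy/agent call shapes here are assumptions; only the error class name and the guidance message come from the commit):

```ts
// Generic error thrown by ai-proxy when no AI configuration is present.
class AINotConfiguredError extends Error {}

// In the agent package: catch the generic error and add agent-specific guidance.
async function handleAiRequest(proxy: { handle: () => Promise<unknown> }): Promise<unknown> {
  try {
    return await proxy.handle();
  } catch (error) {
    if (error instanceof AINotConfiguredError) {
      throw new Error(`${error.message}. Please call addAi() on your agent.`);
    }

    throw error;
  }
}
```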
The test was not properly waiting for the HTTP server to close.
Changed afterAll to use a Promise wrapper around server.close() callback.

This removes the need for forceExit: true in Jest config.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
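
The afterAll change might look like this (a sketch; the server variable is assumed to be the HTTP server started in the test setup):

```ts
import type { Server } from 'http';

declare const server: Server; // started in beforeAll (not shown)

afterAll(async () => {
  // Wait for the close callback so Jest can exit without forceExit: true.
  await new Promise<void>((resolve, reject) => {
    server.close(err => (err ? reject(err) : resolve()));
  });
});
```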
Tests typically complete in 200-1600ms. 30 second timeouts were excessive.
- Single API calls: 10s timeout (was 30s)
- Multi-turn conversation: 15s timeout (was 60s)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add support for Anthropic's Claude models in the ai-proxy package using
@langchain/anthropic. This allows users to configure Claude as their AI
provider alongside OpenAI.

Changes:
- Add @langchain/anthropic dependency
- Add ANTHROPIC_MODELS constant with supported Claude models
- Add AnthropicConfiguration type and AnthropicModel type
- Add AnthropicUnprocessableError for Anthropic-specific errors
- Implement message conversion from OpenAI format to LangChain format
- Implement response conversion from LangChain format back to OpenAI format
- Add tool binding support for Anthropic with tool_choice conversion
- Add comprehensive tests for Anthropic provider

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
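
For the tool_choice conversion mentioned above, a plausible mapping between the public OpenAI and Anthropic formats is sketched below; the PR's actual convertToolChoiceToLangChain may differ:

```ts
type OpenAIToolChoice =
  | 'none'
  | 'auto'
  | 'required'
  | { type: 'function'; function: { name: string } };

type AnthropicToolChoice =
  | { type: 'auto' }
  | { type: 'any' }
  | { type: 'none' }
  | { type: 'tool'; name: string };

function convertToolChoice(choice?: OpenAIToolChoice): AnthropicToolChoice | undefined {
  if (choice === undefined) return undefined;
  if (choice === 'auto') return { type: 'auto' };
  if (choice === 'none') return { type: 'none' };
  // OpenAI's 'required' ("the model must call some tool") maps to Anthropic's 'any'.
  if (choice === 'required') return { type: 'any' };

  // Forcing a specific function maps to Anthropic's named-tool choice.
  return { type: 'tool', name: choice.function.name };
}
```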
… dispatcher

- Move convertMessagesToLangChain inside try-catch to properly handle JSON.parse errors
- Update OpenAIMessage interface to allow null content (per OpenAI API spec)
- Add null content handling for all message types with fallback to empty string

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Mirror OpenAI integration tests for Anthropic provider.
Requires ANTHROPIC_API_KEY environment variable.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Merge OpenAI and Anthropic integration tests into a single file
using describe.each to run the same tests against both providers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
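
The describe.each pattern referred to above could be shaped roughly like this (provider setup details are assumptions, not the PR's actual test file):

```ts
const providers = [
  { provider: 'openai', apiKeyEnv: 'OPENAI_API_KEY' },
  { provider: 'anthropic', apiKeyEnv: 'ANTHROPIC_API_KEY' },
] as const;

describe.each(providers)('ai-proxy integration ($provider)', ({ provider, apiKeyEnv }) => {
  const apiKey = process.env[apiKeyEnv];

  it('is configured with an API key', () => {
    // The same test body runs once per provider.
    expect(apiKey).toBeDefined();
    expect(['openai', 'anthropic']).toContain(provider);
  });
});
```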
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
alban bertolini and others added 7 commits February 6, 2026 18:18
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…ropic models

Tests tool execution across all supported models with informative
skip messages for deprecated/unavailable models.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…list

- Import AnthropicModel type from @anthropic-ai/sdk for autocomplete
- Allow custom strings with (string & NonNullable<unknown>) pattern
- Remove ANTHROPIC_MODELS constant export (now test-only)
- Add @anthropic-ai/sdk as explicit dependency
- Add ANTHROPIC_API_KEY to env example
- Fix Jest module resolution for @anthropic-ai/sdk submodules

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
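
The autocomplete-plus-custom-strings pattern from that commit, sketched with a local stand-in for the SDK type (the exact export imported from @anthropic-ai/sdk is not shown here):

```ts
// Stand-in for the model union imported from @anthropic-ai/sdk.
type KnownClaudeModel = 'claude-sonnet-4-5' | 'claude-haiku-4-5';

// `string & NonNullable<unknown>` keeps literal autocomplete while still
// accepting arbitrary model identifiers.
type AnthropicModel = KnownClaudeModel | (string & NonNullable<unknown>);

const fromList: AnthropicModel = 'claude-sonnet-4-5';      // suggested by the editor
const custom: AnthropicModel = 'claude-internal-preview';  // still type-checks
```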
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The function is only used in Router, so it makes sense to keep it there.
This simplifies the provider-dispatcher module and keeps related code together.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… file

- Create supported-models.test.ts with direct function tests
- Remove duplicate model list tests from router.test.ts
- Simplify index.ts exports using export * pattern

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Scra3 force-pushed the feat/prd-87-anthropic-support branch from 6d0f0b4 to 565361c on February 6, 2026 at 17:39
alban bertolini and others added 3 commits February 6, 2026 18:39
- Remove duplicate isModelSupportingTools in router.ts (use supported-models import)
- Guard validateConfigurations to only apply OpenAI model checks
- Add status-based error handling for Anthropic (429, 401) matching OpenAI pattern
- Move message conversion outside try-catch so input errors propagate directly
- Add explicit validation for tool_call_id on tool messages
- Add JSON.parse error handling with descriptive AIBadRequestError
- Throw on unknown message roles instead of silent HumanMessage fallback
- Use nullish coalescing (??) for usage metadata defaults
- Fix import ordering in integration test
- Align router test model lists with supported-models.ts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
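
A hedged sketch of the status-based error handling listed above; AnthropicUnprocessableError is named in this PR, while the rate-limit and authentication error classes here are hypothetical stand-ins:

```ts
class AIRateLimitError extends Error {}       // hypothetical stand-in
class AIAuthenticationError extends Error {}  // hypothetical stand-in
class AnthropicUnprocessableError extends Error {}

function rethrowAnthropicError(error: unknown): never {
  const status = (error as { status?: number }).status;

  if (status === 429) throw new AIRateLimitError('Anthropic rate limit exceeded');
  if (status === 401) throw new AIAuthenticationError('Invalid Anthropic API key');

  throw new AnthropicUnprocessableError(
    error instanceof Error ? error.message : 'Unknown Anthropic error',
  );
}
```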
The integration tests need a full rework for Anthropic support.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add Anthropic API integration tests mirroring the existing OpenAI tests:
- Basic chat, tool calls, tool_choice: required, multi-turn conversations
- Error handling for invalid API keys
- Model discovery via anthropic.models.list() with tool support verification

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Scra3 force-pushed the feat/prd-87-anthropic-support branch from cabba2c to 4bd08ce on February 7, 2026 at 10:50