
Conversation

@brainless
Owner

Summary

This PR adds support for local LLM providers to the nocodo-llm-sdk, enabling developers to use locally hosted models through Ollama and llama.cpp. This expands the SDK's capabilities beyond cloud-based providers, offering privacy, cost savings, and offline use.

Key Additions

  • Ollama Provider: Full integration with Ollama's /api/chat endpoint (a request sketch covering both local providers follows this list)

    • Support for local models (Llama 3.1, Ministral, etc.)
    • Tool/function calling support
    • System prompt handling
    • Configurable base URL (defaults to http://localhost:11434)
    • Debug logging for requests/responses via NOCODO_LLM_LOG_PAYLOADS
  • llama.cpp Provider: OpenAI-compatible API integration

    • Supports llama.cpp server mode
    • Function calling capabilities
    • Configurable base URL (defaults to http://localhost:8080)
  • Documentation: Added comprehensive README examples and external API documentation
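
For reference, here is a minimal, self-contained sketch of the request shapes the two providers target, using plain reqwest and serde_json rather than the SDK's own builders (the SDK's public API is not shown in this PR description, so nothing below should be read as its actual interface). It shows a non-streaming Ollama /api/chat call with a system prompt and an OpenAI-style tool definition, and the equivalent call to llama-server's OpenAI-compatible endpoint on its default port.

```rust
// Illustrative only: raw HTTP calls to the two local servers this PR targets.
// Endpoint paths and default ports come from the PR description; the client
// code is a reqwest sketch, not the nocodo-llm-sdk API.
//
// [dependencies]
// reqwest = { version = "0.12", features = ["blocking", "json"] }
// serde_json = "1"

use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Ollama: native chat endpoint, non-streaming, with a system prompt
    // and one OpenAI-style tool definition (note: no `$schema` key).
    let ollama_body = json!({
        "model": "llama3.1",
        "stream": false,
        "messages": [
            { "role": "system", "content": "You are a terse assistant." },
            { "role": "user", "content": "What's the weather in Berlin?" }
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": { "city": { "type": "string" } },
                    "required": ["city"]
                }
            }
        }]
    });
    let ollama_resp: Value = client
        .post("http://localhost:11434/api/chat")
        .json(&ollama_body)
        .send()?
        .json()?;
    println!("ollama: {}", ollama_resp["message"]);

    // llama.cpp: llama-server exposes an OpenAI-compatible endpoint, so the
    // same message/tool shapes are posted to /v1/chat/completions instead.
    let llamacpp_body = json!({
        "model": "default",
        "messages": [
            { "role": "user", "content": "Say hello in one word." }
        ]
    });
    let llamacpp_resp: Value = client
        .post("http://localhost:8080/v1/chat/completions")
        .json(&llamacpp_body)
        .send()?
        .json()?;
    println!("llama.cpp: {}", llamacpp_resp["choices"][0]["message"]["content"]);

    Ok(())
}
```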

Bug Fixes (Ollama)

  • Fixed the system prompt being ignored in requests
  • Fixed unintended streaming by explicitly setting stream=false (Ollama streams by default)
  • Removed $schema from tool parameters, which Ollama's validation rejects
  • Added debug logging for troubleshooting (see the sketch after this list)
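
The last two fixes are small JSON and environment-variable checks. A minimal sketch of what they could look like (the helper names here are illustrative, not the SDK's actual code; only the $schema key and the NOCODO_LLM_LOG_PAYLOADS variable come from the PR itself):

```rust
use serde_json::Value;

/// Remove the `$schema` key that some JSON Schema generators add at the top
/// level of a tool's `parameters` object; Ollama rejects requests containing it.
fn strip_schema_key(parameters: &mut Value) {
    if let Some(obj) = parameters.as_object_mut() {
        obj.remove("$schema");
    }
}

/// Print a request/response payload only when NOCODO_LLM_LOG_PAYLOADS is set,
/// e.g. `NOCODO_LLM_LOG_PAYLOADS=1`.
fn log_payload(label: &str, payload: &Value) {
    if std::env::var("NOCODO_LLM_LOG_PAYLOADS").is_ok() {
        eprintln!("[nocodo-llm-sdk] {label}: {payload}");
    }
}
```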

Files Changed

  • 16 files changed, 2,062 insertions
  • New modules: ollama/ and llama_cpp/ with builder, client, types, and tools
  • Updated models and providers list
  • Added external API documentation for reference

Test Plan

  • Test Ollama integration with a local Llama model
  • Test llama.cpp integration with llama-server (a smoke-test sketch follows this list)
  • Verify function calling works with both providers
  • Test that system prompts are correctly handled
  • Verify debug logging with NOCODO_LLM_LOG_PAYLOADS=1
  • Confirm default URLs work for both providers
  • Test custom base URLs for both providers
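
All of these steps need a locally running server, so one option is to keep them as opt-in integration tests behind #[ignore] and run them with cargo test -- --ignored. A sketch against llama-server's default URL, using raw HTTP rather than the SDK client (whose API is not shown here):

```rust
// Opt-in smoke test; run with `cargo test -- --ignored` while a local
// `llama-server` instance is listening on the default port.
#[test]
#[ignore = "requires a local llama-server on the default port"]
fn llama_cpp_default_url_responds() {
    let body = serde_json::json!({
        "model": "default",
        "messages": [{ "role": "user", "content": "ping" }]
    });
    let resp = reqwest::blocking::Client::new()
        .post("http://localhost:8080/v1/chat/completions")
        .json(&body)
        .send()
        .expect("llama-server should be reachable on the default URL");
    assert!(resp.status().is_success());
}
```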

🤖 Generated with Claude Code

brainless and others added 2 commits February 9, 2026 17:43
- Fix: Add system prompt handling (was being ignored)
- Fix: Set stream=false explicitly to prevent streaming issues
- Fix: Strip $schema from tool parameters (causes Ollama errors)
- Add: Debug logging for requests/responses (NOCODO_LLM_LOG_PAYLOADS)
- Add: Ministral 3 3B model constant

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@github-actions

📊 Code Complexity Analysis

  • Lines added: 2062
  • Lines removed: 1
  • Net change: 2061

💡 Suggestion: This is a large PR with 2062 added lines. Consider:

  • Breaking it into smaller, focused PRs
  • Adding comprehensive tests for the new functionality
  • Updating documentation as needed

Automated analysis by GitHub Actions

@github-actions

🤖 Automated Code Review Summary

This automated review was generated to help ensure code quality and security standards.

Rust Code Analysis

  • ⚠️ Code formatting: Some Rust files are not formatted according to rustfmt standards.

    • Run cargo fmt to fix formatting issues.
  • ⚠️ Linting: Clippy found potential issues in Rust code.

    • Run cargo clippy --workspace --all-targets -- --deny warnings to see detailed warnings.

Security Analysis

  • ⚠️ Potential secrets: Found references to passwords, secrets, or tokens.

    • Please verify no hardcoded credentials are present.
  • ℹ️ Debug output: Found debug print statements.

    • Consider removing or replacing with proper logging.

Recommendations

  • Run the full CI pipeline to ensure all tests pass
  • Consider adding tests for any new functionality
  • Update documentation if API changes are involved
  • Follow the development workflow described in CLAUDE.md

This review was automatically generated. Please address any issues before merging.

@brainless merged commit 41792cf into main Feb 10, 2026
4 checks passed
@brainless deleted the feature/local-llm-providers-in-llm-sdk branch February 10, 2026 10:01