Multiple LLM Provider support for Checkpoint AI Summarization #394

@ishaan812

Description

Problem or use case

Right now, checkpoint summaries can only be generated using Claude. If your team uses Gemini, OpenAI, Ollama or another AI provider, you can’t use those for checkpoint summaries, even though Entire already supports multiple coding agents. That limits adoption for teams who can’t or don’t want to depend on Claude.

Desired behavior

Users should be able to choose which AI provider generates checkpoint summaries, in line with their existing tools and preferences. The experience would stay the same (intent, outcome, learnings, friction, open items), with the choice of provider handled in settings. When something goes wrong, summarization should still fail gracefully without blocking commits.

Proposed solution

Add a provider selection option for summarization in settings, similar to how other options are configured. Keep the current Claude behavior as the default when nothing is set, so existing setups keep working without changes. Start with one additional provider (e.g. an API-based option), then expand support later based on feedback.
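A minimal sketch of what this could look like, assuming a Go codebase (Entire's actual source isn't shown here, so the `Summarizer` interface, the provider names, and the settings values are all illustrative, not the project's real API). The key properties from the proposal are preserved: an unset provider keeps the current Claude behavior, and a summarization failure never blocks the commit.

```go
package main

import "fmt"

// Summarizer generates a checkpoint summary from a diff or session transcript.
type Summarizer interface {
	Summarize(input string) (string, error)
}

// claudeSummarizer stands in for the existing Claude-backed implementation.
type claudeSummarizer struct{}

func (claudeSummarizer) Summarize(input string) (string, error) {
	return "summary via claude: " + input, nil
}

// openAICompatSummarizer stands in for an API-based provider reached at a
// configurable base URL (hosted OpenAI, a local Ollama endpoint, etc.).
type openAICompatSummarizer struct{ baseURL string }

func (s openAICompatSummarizer) Summarize(input string) (string, error) {
	return "summary via " + s.baseURL + ": " + input, nil
}

// newSummarizer picks a provider from settings; an empty value keeps the
// current Claude behavior so existing setups work without changes.
func newSummarizer(provider, baseURL string) (Summarizer, error) {
	switch provider {
	case "", "claude":
		return claudeSummarizer{}, nil
	case "openai-compatible":
		return openAICompatSummarizer{baseURL: baseURL}, nil
	default:
		return nil, fmt.Errorf("unknown summarization provider: %q", provider)
	}
}

// summarizeCheckpoint is best-effort: on any error it returns an empty
// summary and the commit proceeds regardless.
func summarizeCheckpoint(s Summarizer, input string) string {
	out, err := s.Summarize(input)
	if err != nil {
		return ""
	}
	return out
}

func main() {
	s, err := newSummarizer("", "") // nothing set: Claude default
	if err != nil {
		return
	}
	fmt.Println(summarizeCheckpoint(s, "refactor auth flow"))
}
```

Keeping the selection behind a single constructor like this also makes the "expand support later" part cheap: each new provider is one more case, with no changes to the checkpoint code path.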

Also add an `entire model` command that lets users configure their own LLM with an API key and other settings. Supporting any OpenAI-compatible API is a reasonable starting point, since that's the industry standard.

Alternatives or workarounds

No response

Labels: enhancement (New feature or request)