
feat(llm): Implement experimental chat feature #28

Open

rsthornton wants to merge 1 commit into main from feature/llm-chat

Conversation

@rsthornton

Description:

This is an experimental, exploratory pull request that establishes a foundational architecture for deep LLM integration within BERT. The initial implementation is a proof-of-concept chat interface, designed to validate the core technical components and provide a launchpad for more advanced, generative features.

This PR is intended for review and discussion and should not be merged into main.

Architectural Overview

The implementation is split into three main parts:

  1. Tauri Backend Service (src-tauri/src/chat_service.rs): A Rust-based service that runs only in the desktop environment.

    • Features an abstracted LLMProvider to handle different backends (Ollama, OpenAI, etc.).
    • Uses Rust feature flags (local-llm, cloud-api) for different compile-time builds.
    • Manages API keys and model context via a thread-safe Mutex.
    • Exposes Tauri commands to the frontend.
  2. Leptos Frontend Component (src/leptos_app/components/chat.rs): A self-contained UI component providing a chat-based user interface.

  3. Tauri Command Bridge (src-tauri/src/lib.rs): Correctly registers the backend commands, making them available to the Leptos frontend.
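The backend pieces above could be sketched roughly as follows. This is an illustrative outline only, not the actual code in `chat_service.rs`: the trait name, method signatures, and `MockProvider` type are assumptions made for the sketch, and the real service is async and wired through Tauri commands.

```rust
use std::sync::Mutex;

/// Abstraction over LLM backends (Ollama, OpenAI, etc.), as described
/// for the LLMProvider in chat_service.rs. Signatures are illustrative.
trait LlmProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// Stand-in provider used only for this sketch.
struct MockProvider;

impl LlmProvider for MockProvider {
    fn name(&self) -> &str {
        "mock"
    }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

/// Thread-safe state holding the API key and the active provider,
/// mirroring the Mutex-guarded state the PR describes.
struct ChatState {
    api_key: Mutex<Option<String>>,
    provider: Box<dyn LlmProvider + Send + Sync>,
}

fn main() {
    let state = ChatState {
        api_key: Mutex::new(None),
        provider: Box::new(MockProvider),
    };
    // A Tauri command would set the key and forward chat messages.
    *state.api_key.lock().unwrap() = Some("sk-placeholder".into());
    let reply = state.provider.complete("hello").unwrap();
    println!("{} -> {}", state.provider.name(), reply);
}
```

In the real service, each method on the trait would map onto a registered Tauri command so the Leptos frontend can invoke it.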

Vision & Future Possibilities: Generative System Modeling

While the current proof-of-concept is a chat interface, the true power of this foundation is its potential as a generative interface for system modeling. The architecture is designed to be extended to support more complex interactions, such as:

  • Natural Language to JSON: A user could describe a system in plain English (e.g., "Create a simple feedback loop with a stock, an inflow, and an outflow connected to the stock."), and the LLM backend could translate this into a valid BERT JSON structure, which is then loaded into the application.
  • Model Analysis & Explanation: A user could upload an existing JSON file and ask the LLM to explain the system's structure or identify potential areas of interest.
  • Guided Model Creation: The chat could act as an assistant, guiding a new user through the process of building their first model step-by-step.
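The natural-language-to-JSON path could take the shape of a prompt-build, model-call, validate pipeline. The sketch below is hypothetical: the prompt wording, the `"stocks"`/`"flows"` field names, and the `mock_llm` stand-in are all assumptions, not the BERT JSON schema or an actual backend call.

```rust
// Hypothetical NL-to-JSON pipeline sketch; field names and prompt are illustrative.
fn build_prompt(user_request: &str) -> String {
    format!(
        "Translate this system description into BERT JSON with \
         \"stocks\" and \"flows\" arrays. Respond with JSON only.\n\n{user_request}"
    )
}

/// Stand-in for a real LLM call (Ollama/OpenAI in the actual backend).
fn mock_llm(_prompt: &str) -> String {
    r#"{"stocks":[{"id":"s1"}],"flows":[{"from":"s1","to":"s1"}]}"#.to_string()
}

/// Cheap sanity check before handing the payload to the loader; a real
/// implementation would deserialize into typed structs (e.g. with serde).
fn looks_like_bert_json(reply: &str) -> bool {
    let t = reply.trim();
    t.starts_with('{') && t.contains("\"stocks\"") && t.contains("\"flows\"")
}

fn main() {
    let prompt = build_prompt("Create a simple feedback loop with one stock.");
    let reply = mock_llm(&prompt);
    assert!(looks_like_bert_json(&reply));
    println!("candidate model JSON: {reply}");
}
```

Whether validation lives in a parsing layer like this, in prompt engineering, or in function calling is exactly the open question raised below.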

This branch provides the core building blocks to begin exploring these exciting possibilities.

Key Decisions & Implementation Details

  • Hybrid Environment Handling (Desktop vs. Web): The feature intelligently detects its environment. In the full-featured Tauri desktop app, it uses the live Rust backend. In a standard web browser where the backend is unavailable, it gracefully falls back to a mock chat experience to ensure the UI is always functional and demonstrable. This dual-path approach is critical for a hybrid application like BERT.
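The dual-path fallback can be reduced to a simple selection step. This is a minimal sketch, assuming the frontend can probe for the Tauri runtime (e.g. via the injected `window.__TAURI__` object when it is enabled); the enum and function names here are made up for illustration, not taken from the component.

```rust
/// Which chat path the UI should use.
enum ChatBackend {
    Live, // Tauri desktop: invoke the Rust backend commands
    Mock, // plain browser: canned responses keep the UI demonstrable
}

/// In the real component this flag would come from a runtime probe
/// for the Tauri environment rather than a plain boolean.
fn select_backend(tauri_available: bool) -> ChatBackend {
    if tauri_available {
        ChatBackend::Live
    } else {
        ChatBackend::Mock
    }
}

fn respond(backend: &ChatBackend, message: &str) -> String {
    match backend {
        ChatBackend::Live => format!("[live] would invoke chat command with: {message}"),
        ChatBackend::Mock => format!("[mock] canned reply to: {message}"),
    }
}

fn main() {
    // In a browser without the backend, the mock path keeps the UI working.
    let backend = select_backend(false);
    println!("{}", respond(&backend, "hello"));
}
```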

Open Questions & Request for Feedback

This prototype raises several questions that would benefit from senior review:

  1. Path to Generative JSON: What would be the best approach to translate chat commands into BERT JSON? Would this involve complex prompt engineering, function calling, or a dedicated parsing layer in the Rust backend?
  2. Configuration & Settings: What is the best long-term approach for managing API keys and model selection for these more advanced tasks?
  3. Web-Version Strategy: For a public-facing web version, what should the LLM strategy be? Should we proxy requests through our own server, or is there a path to client-side generation?

@rsthornton rsthornton requested a review from Jtensminger June 30, 2025 20:43
@rsthornton rsthornton linked an issue Jun 30, 2025 that may be closed by this pull request


Development

Successfully merging this pull request may close these issues.

Research AI integrations
