refactor(llm): extract LLM client from main loop to improve modularity#470

Open
Lochit-Vinay wants to merge 33 commits into fireform-core:main from Lochit-Vinay:refactor/llm-client-extraction
Conversation

Lochit-Vinay commented Apr 19, 2026

Closes #468

🧩 Extract LLM client from main loop

Summary

This PR extracts the LLM API interaction logic from LLM.main_loop() into a separate module (llm_client.py).

The goal is to reduce tight coupling in the LLM pipeline and make future changes (validation, fallback handling, prompt improvements) easier to implement independently.


Changes

  • Moved Ollama API call logic into src/llm_client.py
  • Replaced inline request logic in llm.py with a function call
  • No changes to existing behavior
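
Since the diff isn't shown inline here, the shape of the extracted module can only be sketched. The following is a minimal, hypothetical version of `src/llm_client.py`: the function names (`build_payload`, `generate`), the default model name, and the use of the standard library instead of a third-party HTTP client are all assumptions, not taken from the actual PR. The endpoint and request body do follow Ollama's documented `/api/generate` API.

```python
# llm_client.py — hypothetical sketch of the extracted module.
# Function names, default model, and urllib usage are assumptions;
# the endpoint and payload shape follow Ollama's /api/generate API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Assemble the JSON body for a non-streaming Ollama generate request."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "llama3", url: str = OLLAMA_URL) -> str:
    """Send one generation request to Ollama and return the response text."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a module like this, the inline request logic in `llm.py` would reduce to something like `from llm_client import generate`, so that `LLM.main_loop()` never touches HTTP details directly and later changes (fallback handling, validation) stay inside the client module.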

Why this matters

Currently, multiple responsibilities (prompting, API calls, parsing, validation) are tightly coupled in the same flow.

This extraction:

  • improves separation of concerns
  • reduces overlap between feature PRs
  • makes the pipeline easier to extend

Scope

  • Refactor only (no functional changes)
  • Existing behavior is preserved

Context

Part of addressing: #468

This is the first step in a series of small refactors to modularize the LLM pipeline.

Lochit-Vinay and others added 30 commits March 30, 2026 00:58
Replace generic Firmware/Hardware/SDK fields with Python version,
Docker/Compose version, and OS — relevant to this project.

Fixes fireform-core#458
Development

Successfully merging this pull request may close these issues.

[ARCH] Reduce tight coupling in LLM pipeline to avoid PR conflicts