feat: conversation database #1045

Merged
AAClause merged 23 commits into master from convDatabase
Feb 23, 2026

Conversation

clementb49 (Collaborator) commented Feb 21, 2026

Summary by CodeRabbit

Release Notes

  • New Features
    • Added conversation persistence with history browser to save, search, and reopen previous conversations
    • Added automatic draft saving while editing prompts with configurable auto-save options
    • Added conversation history dialog with search, pagination, and delete functionality
    • Added privacy toggle and rename capabilities for conversations
    • Added option to reopen last active conversation on startup

coderabbitai bot (Contributor) commented Feb 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

This PR introduces SQLite-backed persistence for conversations using SQLAlchemy and Alembic, adding database models, a manager class for full CRUD operations, configuration settings for auto-save and draft behaviors, GUI components for conversation history and draft management, and a comprehensive test suite.

Changes

Cohort / File(s) Summary
Configuration & Dependencies
basilisk/config/main_config.py, pyproject.toml
Adds four new ConversationSettings fields (auto_save_to_db, auto_save_draft, reopen_last_conversation, last_active_conversation_id) and dependencies for SQLAlchemy, Alembic, and associated packages with updated build configuration.
Database Models & Infrastructure
basilisk/conversation/database/models.py, basilisk/conversation/database/manager.py, basilisk/conversation/database/__init__.py
Introduces SQLAlchemy ORM models for conversations, system prompts, message blocks, messages, attachments, citations, and a comprehensive ConversationDatabase manager class with lifecycle management, migrations, and full CRUD operations (save, load, list, delete, draft handling).
Conversation Models
basilisk/conversation/attached_file.py, basilisk/conversation/conversation_model.py
Adds optional db_id fields to AttachmentFile, SystemMessage, and MessageBlock; introduces __eq__ for SystemMessage comparing by role and content.
GUI Components & Integration
basilisk/gui/conversation_history_dialog.py, basilisk/gui/conversation_tab.py, basilisk/gui/main_frame.py, basilisk/gui/preferences_dialog.py
Adds ConversationHistoryDialog for browsing/searching/reopening conversations; extends ConversationTab with draft lifecycle (debounced auto-save, restoration); adds MainFrame handlers for history, privacy toggling, last conversation reopening, and title persistence; extends PreferencesDialog with new auto-save and history options.
Application Initialization
basilisk/main_app.py
Initializes and manages ConversationDatabase instance at application startup and cleanup.
Tests
tests/conftest.py, tests/conversation/conftest.py, tests/conversation/test_*.py, tests/conversation/database/*
Adds comprehensive test fixtures and suites covering database models, manager CRUD operations, round-trip conversions, and draft/attachment handling.

Sequence Diagrams

sequenceDiagram
    participant User as User (GUI)
    participant ConvTab as ConversationTab
    participant ConvDB as ConversationDatabase
    participant SQLite as SQLite DB

    User->>ConvTab: Prompt text changed
    ConvTab->>ConvTab: Debounce timer (DRAFT_SAVE_DELAY_MS)
    ConvTab->>ConvTab: Build draft MessageBlock
    ConvTab->>ConvDB: save_draft_block()
    ConvDB->>SQLite: INSERT/UPDATE draft block
    ConvDB-->>ConvTab: Success

    User->>ConvTab: Send message (streaming complete)
    ConvTab->>ConvTab: Build final MessageBlock
    ConvTab->>ConvDB: save_message_block()
    ConvDB->>SQLite: INSERT message block with attachments & citations
    ConvDB-->>ConvTab: Block ID

sequenceDiagram
    participant User as User (GUI)
    participant HistDialog as ConversationHistoryDialog
    participant MainFrame as MainFrame
    participant ConvDB as ConversationDatabase
    participant ConvTab as ConversationTab

    User->>HistDialog: Search/Select conversation
    HistDialog->>ConvDB: list_conversations() / load_conversation()
    ConvDB-->>HistDialog: Metadata / Full Conversation
    User->>HistDialog: Click Open
    HistDialog->>MainFrame: Return selected conv_id
    MainFrame->>ConvDB: load_conversation(conv_id)
    ConvDB-->>MainFrame: Reconstructed Conversation
    MainFrame->>ConvTab: open_from_db(conv_id)
    ConvTab-->>MainFrame: Restored tab

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

  • AAClause
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — Passed: the title 'feat: conversation database' clearly and concisely summarizes the main change, adding persistent database support for conversations.
  • Docstring Coverage — Passed: docstring coverage is 97.47%, above the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 26

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.gitignore:
- Line 35: Remove the personal tool config entry from the repository .gitignore
and instead add ".claude/settings.local.json" to your personal/global gitignore
(e.g., via core.excludesFile) so the repo ignores remain team-wide; locate the
".claude/settings.local.json" line in the project ".gitignore" and delete it,
and advise contributors to add that path to their own global gitignore
configuration.

In `@basilisk/conversation/database/manager.py`:
- Around line 254-261: The bare except swallows error details when calling
attachment.read_as_bytes(); change the block to catch the exception as a
variable (e.g., except Exception as exc) and include the exception info in the
log (for example call log.warning("Could not read attachment %s, skipping",
attachment.name, exc_info=True) or include exc in the message) so the
traceback/value is preserved while keeping the same early return; locate the
try/except around attachment.read_as_bytes() in manager.py and update that
handler accordingly.
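The suggested logging fix can be sketched as follows. This is a minimal stand-in, not the project's actual code: `read_attachment_bytes` is a hypothetical wrapper around the real `attachment.read_as_bytes()` call in manager.py.

```python
import logging

log = logging.getLogger(__name__)

def read_attachment_bytes(attachment):
    """Return the attachment's bytes, or None when it cannot be read."""
    try:
        return attachment.read_as_bytes()
    except Exception:
        # exc_info=True attaches the traceback to the log record, so the
        # error details are preserved instead of being silently swallowed
        log.warning(
            "Could not read attachment %s, skipping",
            getattr(attachment, "name", "<unknown>"),
            exc_info=True,
        )
        return None
```

The early-return behaviour is unchanged; only the diagnostic information in the log improves.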
- Line 66: The docstring for the get_engine method in
basilisk.conversation.database.manager currently reads "Get the sqlalchemy
database engine" and violates ruff D415; update that docstring to include
trailing punctuation (e.g., a period) so it becomes "Get the sqlalchemy database
engine." to satisfy the style check.
- Line 668: Move the local import "from basilisk.consts import BSKC_VERSION" out
of the body and add it to the module-level import block at the top of manager.py
alongside the other imports; remove the deferred import line so the code uses
the top-level BSKC_VERSION symbol everywhere in the file (no functional changes,
just relocate the import to the file header).
- Around line 514-530: In _apply_search_filter, user-provided search contains
unescaped LIKE wildcards (% and _) so update the function to first escape
backslashes, percent and underscore in the raw search string (e.g. replace "\"
-> "\\", "%" -> "\%", "_" -> "\_"), then build search_term =
f"%{escaped_search}%"; use the .like(..., escape='\\') form on both
DBMessage.content.like and DBConversation.title.like (and on the msg_conv_ids
subquery) so the database treats escaped characters literally, keeping the
DBConversation.id.in_(select(msg_conv_ids.c.conversation_id)) logic unchanged.
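The escaping step described above can be sketched like this (hypothetical helper name; the `.like(..., escape="\\")` lines in the trailing comment mirror the SQLAlchemy usage the comment proposes):

```python
def escape_like(raw: str) -> str:
    """Escape LIKE wildcards (% and _) and the escape character itself."""
    return (
        raw.replace("\\", "\\\\")  # escape the escape character first
        .replace("%", "\\%")
        .replace("_", "\\_")
    )

# The escaped term is then matched literally with an explicit escape char:
# search_term = f"%{escape_like(search)}%"
# DBConversation.title.like(search_term, escape="\\")
# DBMessage.content.like(search_term, escape="\\")
```

Escaping the backslash before `%` and `_` matters; doing it last would double-escape the sequences just produced.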
- Around line 724-749: The created AttachmentFile/ImageFile instances in
_make_attachment are missing the DB primary key, so pass db_att.id into the
constructed objects and set their db_id attribute (e.g., include db_id=db_att.id
when creating ImageFile and AttachmentFile) so that later _save_attachment sees
att_id present and uses the fast-path deduplication instead of the hash-based
path; update the _make_attachment function to accept and propagate db_att.id for
both ImageFile and AttachmentFile creations.

In `@basilisk/conversation/database/models.py`:
- Around line 22-25: The timestamp fields created_at and updated_at use
timezone-naive datetime.now; change their defaults and onupdate to
timezone-aware values (e.g., datetime.now(timezone.utc)) and import timezone
from datetime, and apply the same fix to the other created_at/updated_at
mapped_column declarations found later in this file (the other pair around the
model where those columns are duplicated). Update mapped_column(default=...) and
mapped_column(default=..., onupdate=...) to call datetime.now(timezone.utc) so
all timestamps are stored as UTC-aware datetimes.
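A minimal sketch of the timezone-aware default, with the SQLAlchemy `mapped_column` lines shown as comments since the real declarations live in models.py (`utcnow` is a hypothetical helper name):

```python
from datetime import datetime, timezone

def utcnow() -> datetime:
    """Zero-argument callable for column defaults; returns an aware UTC time."""
    return datetime.now(timezone.utc)

# In models.py the columns would then read, per the review comment:
# created_at: Mapped[datetime] = mapped_column(default=utcnow)
# updated_at: Mapped[datetime] = mapped_column(default=utcnow, onupdate=utcnow)
```

Passing the callable (not its result) is the key detail: `default=datetime.now(timezone.utc)` would evaluate once at import time and freeze a single timestamp for every row.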
- Around line 88-90: The conversation_system_prompt_id foreign key currently has
no ON DELETE action and can cause FK constraint errors during cascaded deletes;
update the mapped_column definition for conversation_system_prompt_id to use
ForeignKey("conversation_system_prompts.id", ondelete="SET NULL") so that when a
referenced conversation_system_prompts row is removed the FK is set to NULL (the
column is nullable), ensuring ORM cascade order won’t raise constraint
violations; update the mapped_column call where conversation_system_prompt_id is
defined to include this ondelete option.
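The `ON DELETE SET NULL` behaviour can be demonstrated at the raw SQLite level (the SQLAlchemy equivalent is `ForeignKey("conversation_system_prompts.id", ondelete="SET NULL")`; table and column names below follow the review comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE conversation_system_prompts (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE conversations ("
    " id INTEGER PRIMARY KEY,"
    " conversation_system_prompt_id INTEGER"
    "  REFERENCES conversation_system_prompts(id) ON DELETE SET NULL)"
)
conn.execute("INSERT INTO conversation_system_prompts (id) VALUES (1)")
conn.execute(
    "INSERT INTO conversations (id, conversation_system_prompt_id) VALUES (10, 1)"
)
# Deleting the referenced prompt nulls the FK instead of raising
conn.execute("DELETE FROM conversation_system_prompts WHERE id = 1")
row = conn.execute(
    "SELECT conversation_system_prompt_id FROM conversations WHERE id = 10"
).fetchone()
```

Without the `ON DELETE` action (and with FK enforcement on), the same delete would fail with a constraint error during cascaded deletes.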

In `@basilisk/gui/conversation_history_dialog.py`:
- Around line 143-146: The _on_item_deselected handler currently unconditionally
disables the Open/Delete buttons and causes flicker during keyboard navigation;
change it to defer the disable logic using wx.CallAfter (or otherwise check
current selection count) so it only disables buttons if no item is actually
selected, and coordinate with the existing _on_item_selected handler to enable
buttons when a selection exists; update _on_item_deselected to call
wx.CallAfter(self._update_buttons_state) (or equivalent) and implement an
_update_buttons_state helper that inspects the control's
GetSelectedItemsCount()/GetSelection() and enables/disables self.open_btn and
self.delete_btn accordingly.
- Around line 195-220: The _refresh_list method currently uses a hardcoded
limit=200 in the call to self.conv_db.list_conversations which silently
truncates results; change this to use a configurable page size (e.g.,
self.page_size or a module-level MAX_PAGE_SIZE) instead of 200, add a
total-count query via a new or existing method on conv_db (e.g.,
self.conv_db.count_conversations(search)) and display a "Showing N of M" label
in the dialog, and wire up simple pagination or a "Load more" action that
increments an offset/page attribute and calls _refresh_list again; update
references in _refresh_list (self.conv_db.list_conversations, the list_ctrl
population, and any new label/button state) to use the page/offset and total
count so users see when results are truncated.

In `@basilisk/gui/conversation_tab.py`:
- Around line 1003-1014: The 2000ms magic number used in _on_prompt_text_changed
should be replaced with a module-level constant; define a descriptive constant
(e.g., DRAFT_SAVE_DELAY_MS = 2000) near the top of the module and use it in the
call to self._draft_timer.StartOnce(DRAFT_SAVE_DELAY_MS) so the delay is named
and easy to adjust.
- Around line 986-1001: The code unconditionally clears self.db_conv_id after
attempting to delete the DB record in set_private, which can orphan a DB row if
delete_conversation fails; change the logic so that self.db_conv_id is only set
to None when _get_conv_db().delete_conversation(self.db_conv_id) succeeds (e.g.,
move the assignment into the try block immediately after the delete call), keep
the _draft_timer.Stop() and the except log.error as-is, and ensure the id
remains unchanged on exception so the tab still references the DB record.
- Around line 118-135: Both open_conversation and open_from_db duplicate the
"pop last draft block if response is None, create tab, then restore via
_restore_draft_block" pattern; extract that into a small shared helper (e.g.,
_pop_draft_if_present or a classmethod like _create_tab_with_optional_draft)
that takes the conversation and the tab constructor arguments, performs the
draft extraction (pops last message where message.response is None), constructs
the Tab via cls(...), calls tab._restore_draft_block(draft_block) if a draft was
popped, and returns the created tab; replace the duplicated blocks in
open_conversation and open_from_db to call this helper and keep use of
_restore_draft_block unchanged.
- Around line 1060-1084: The method _save_draft_to_db reads
prompt_panel.prompt_text and prompt_panel.attachment_files then calls
_build_draft_block(), which re-reads the same state; to eliminate the redundancy
modify _build_draft_block to accept prompt_text and attachments (or add an
optional args tuple) and update _save_draft_to_db to pass the already-fetched
prompt_text and attachments into _build_draft_block (and adjust any callers of
_build_draft_block accordingly), ensuring you still return None when appropriate
and keep exception handling around the DB save/delete calls.
- Around line 75-81: The class-level cache _conv_db causes stale DB references
because _conv_db is set once in _get_conv_db and never invalidated; modify
_get_conv_db to return wx.GetApp().conv_db on each call (remove or skip using
cls._conv_db) or add a companion method (e.g., reset_conv_db or clear_conv_db)
that sets cls._conv_db = None and call it wherever the app swaps conv_db; update
references to _conv_db/_get_conv_db accordingly so the live wx.GetApp().conv_db
is used or can be refreshed after reassignment.
- Around line 1032-1052: The MessageBlock instances at the other sites are
incorrectly passing model_id and provider_id as top-level kwargs; change them to
use the same pattern as in _build_draft_block by constructing AIModelInfo and
passing it via the model parameter (e.g., replace MessageBlock(...,
model_id=..., provider_id=..., ...) with MessageBlock(...,
model=AIModelInfo(provider_id=..., model_id=...), ...)); update the
instantiations around the occurrences noted (the ones near the earlier uses at
the locations corresponding to the mistakes around lines ~620 and ~743) so
Pydantic validation for the MessageBlock.model field succeeds.
- Around line 1053-1058: The equality check for SystemMessage is failing because
only __hash__ compares (role, content) while Pydantic's default __eq__ includes
db_id; update the SystemMessage class to add a custom __eq__ that mirrors the
__hash__ logic: in SystemMessage implement an __eq__(self, other: object) that
returns NotImplemented for non-SystemMessage and otherwise compares self.role ==
other.role and self.content == other.content so that the lookup used in
conversation.systems (from get_system_message and the block.system_index
assignment) correctly finds matching entries regardless of db_id.
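The fix can be illustrated with a plain-class sketch. The real `SystemMessage` is a Pydantic model, so the actual change only adds `__eq__` alongside the existing `__hash__`; this stand-in just shows the intended semantics:

```python
class SystemMessage:
    """Sketch: db_id is excluded from both __eq__ and __hash__."""

    def __init__(self, role, content, db_id=None):
        self.role = role
        self.content = content
        self.db_id = db_id

    def __hash__(self):
        return hash((self.role, self.content))

    def __eq__(self, other):
        if not isinstance(other, SystemMessage):
            return NotImplemented
        # Mirror __hash__: compare only role and content, never db_id
        return self.role == other.role and self.content == other.content
```

With matching `__eq__`/`__hash__`, lookups in `conversation.systems` find an existing entry even when one side carries a `db_id` and the other does not.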

In `@basilisk/gui/main_frame.py`:
- Around line 462-478: The code should defensively handle the case where
self.tabs_panels is empty or self.current_tab is an invalid index; before
accessing current = self.current_tab and using current.db_conv_id, check that
self.tabs_panels is non-empty and that 0 <= self.current_tab <
len(self.tabs_panels); if invalid, treat current as None and set
self.conf.conversation.last_active_conversation_id = None, then call
self.conf.save(); this change involves the shutdown path around
self.tabs_panels, self.current_tab, and uses existing symbols like
tab.flush_draft(), tab.completion_handler.stop_completion(),
self.conf.conversation.reopen_last_conversation, and self.conf.save() to ensure
no IndexError occurs when all tabs were closed prior to quitting.

In `@basilisk/gui/preferences_dialog.py`:
- Around line 211-217: The auto_save_draft checkbox should be disabled when
auto_save_to_db is unchecked; add a handler that binds EVT_CHECKBOX for the
auto_save_to_db control (similar to the existing on_resize pattern) that enables
or disables self.auto_save_draft accordingly, set the initial enabled state of
self.auto_save_draft based on self.conf.conversation.auto_save_to_db, and name
the handler clearly (e.g., on_auto_save_to_db_changed) so callers and tests can
find it.

In `@basilisk/main_app.py`:
- Line 117: Fix the docstring that currently reads "Creates a new thread to
handle automatic updates based on the configuration settings. The thread is
started er the background and runs until the application exits or the thread is
stopped." by changing "started er the background" to "started in the background"
in the docstring associated with the automatic update thread (search for the
docstring text "Creates a new thread to handle automatic updates..." or the
function that initializes/starts the automatic update thread) and verify the
updated string is saved so the typo is fully removed.
- Around line 198-208: Fix the typos and add resilience: update the docstring in
init_conversation_db to "at application level" and the docstring in
close_conversation_db to "Close the database connection and release the
singleton", change the log message in close_conversation_db to start with
uppercase ("Closing conversation database") for consistency, and make
init_conversation_db robust by wrapping
ConversationDatabase.get_db_path()/ConversationDatabase(...) in a try/except
that logs the exception and ensures self.conv_db is set to None on failure; in
close_conversation_db, guard the close call by checking that self.conv_db is not
None (or use hasattr(self, "conv_db") and truthiness) before calling
self.conv_db.close() and then log the successful close via log.info.

In `@pyproject.toml`:
- Around line 33-34: The dependency entries "alembic>=1.18,<1.19" and
"sqlalchemy>=2.0,<2.1" are out of alphabetical order; move "alembic" into the
list position after "accessible-output3" and before "anthropic", and move
"sqlalchemy" so it sits between "pyyaml" and "sounddevice" in the dependencies
block of pyproject.toml so the entire list remains alphabetically sorted.

In `@tests/conversation/database/conftest.py`:
- Around line 41-48: The fixture currently constructs ConversationDatabase via
ConversationDatabase.__new__ and manually sets _db_path, _engine, and
_session_factory, which is brittle; add a stable test-friendly factory on
ConversationDatabase (e.g., ConversationDatabase.from_engine or a constructor
arg that accepts an engine) that fully initializes required internal state
instead of bypassing __init__, then update the db_manager fixture to call that
factory; reference ConversationDatabase, the db_manager fixture, and the private
attrs _db_path/_engine/_session_factory so maintainers know which state must be
initialized.

In `@tests/conversation/database/test_manager.py`:
- Around line 204-214: The test test_list_ordered_by_updated is flaky because
the three Conversation instances may get identical updated_at timestamps; modify
the test to set deterministic updated_at values on each Conversation (e.g.,
conv.updated_at = base_time + timedelta(seconds=i)) before calling
db_manager.save_conversation, or alternatively patch datetime.now to return
controlled times; ensure you reference the Conversation instances created in the
test and verify ordering via db_manager.list_conversations so results are
deterministic without relying on sleep.
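The deterministic-timestamp approach might look like this (the commented loop is a sketch; `conversations` and `db_manager` refer to the fixtures named in the comment):

```python
from datetime import datetime, timedelta, timezone

base_time = datetime(2026, 2, 21, 12, 0, tzinfo=timezone.utc)
updated_ats = [base_time + timedelta(seconds=i) for i in range(3)]

# Sketch: assign a distinct updated_at to each Conversation before saving,
# so list_conversations ordering is deterministic without sleeps:
# for conv, ts in zip(conversations, updated_ats):
#     conv.updated_at = ts
#     db_manager.save_conversation(conv)
```

Offsetting from a fixed base avoids both real-clock collisions and the need to patch `datetime.now`.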

In `@tests/conversation/database/test_models.py`:
- Around line 250-263: The test test_unique_role_per_block leaves the SQLAlchemy
session in a failed transaction state after the with
pytest.raises(IntegrityError) block; after that block (and similarly in other
IntegrityError tests), call db_session.rollback() to reset the session state so
subsequent operations won't be affected — locate the IntegrityError assertions
around DBMessage creation in test_unique_role_per_block (and the other tests
referenced) and append a db_session.rollback() immediately after each
pytest.raises block.
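The same rollback pattern can be shown at the raw sqlite3 level; in the test suite the equivalent is a `db_session.rollback()` immediately after each `pytest.raises(IntegrityError)` block:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (block_id INTEGER, role TEXT, UNIQUE (block_id, role))"
)
conn.execute("INSERT INTO messages VALUES (1, 'user')")
conn.commit()

try:
    conn.execute("INSERT INTO messages VALUES (1, 'user')")  # violates UNIQUE
except sqlite3.IntegrityError:
    conn.rollback()  # reset the failed transaction before continuing

# The connection is usable again after the rollback
conn.execute("INSERT INTO messages VALUES (2, 'user')")
conn.commit()
```

Without the rollback, a SQLAlchemy session stays in a failed-transaction state and every subsequent operation in the same test raises.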

In `@tests/conversation/database/test_roundtrip.py`:
- Around line 100-141: Enhance test_attachment_dedup_roundtrip to also verify
the restored attachment content is correct: after loaded =
db_manager.load_conversation(conv_id), open the AttachmentFile(s) on
loaded.messages[0].request.attachments[0] and
loaded.messages[1].request.attachments[0] (or use their .location/UPath) and
read/compare the content to the original "shared content"; keep assertions
similar to test_attachment_content_readable and reuse
AttachmentFile/MessageBlock references to locate where to add these reads and
equality checks.

clementb49 and others added 2 commits February 21, 2026 11:48
- gitignore: remove personal .claude/settings.local.json entry (advise
  contributors to add it to their global gitignore instead)
- models: use timezone-aware datetime.now(timezone.utc) for all
  created_at/updated_at column defaults and onupdate callables
- models: add ondelete="SET NULL" to conversation_system_prompt_id FK
  to avoid constraint violations during cascaded deletes
- conversation_model: add SystemMessage.__eq__ that compares only role
  and content (mirrors __hash__), fixing lookup in OrderedSet when db_id
  differs
- manager: add ConversationDatabase.from_engine() factory classmethod
  for test-friendly initialisation without running Alembic migrations
- main_app: fix docstring typo ("started er the background" →
  "started in the background"); wrap init_conversation_db in try/except;
  capitalise log messages in close_conversation_db
- main_frame: guard on_close against empty tabs_panels / invalid index
  before calling current_tab
- conversation_tab: remove stale _conv_db class-level cache; always
  return wx.GetApp().conv_db directly; add DRAFT_SAVE_DELAY_MS
  constant; extract _pop_draft_if_present helper; fix set_private to
  only clear db_conv_id on successful delete; make _build_draft_block
  accept optional prompt_text/attachments args; pass pre-fetched values
  from _save_draft_to_db to avoid double-read
- conversation_history_dialog: replace hardcoded limit=200 with
  PAGE_SIZE=100 constant; add offset/reset pagination with Load More
  button; add count_label showing "Showing N of M"; fix
  _on_item_deselected to use wx.CallAfter(_update_buttons_state)
- preferences_dialog: disable auto_save_draft when auto_save_to_db is
  unchecked; add on_auto_save_to_db_changed EVT_CHECKBOX handler
- pyproject.toml: sort alembic and sqlalchemy into correct alphabetical
  position in dependencies
- tests/conftest: use ConversationDatabase.from_engine() instead of
  __new__ + manual attribute injection
- tests/test_manager: fix flaky test_list_ordered_by_updated with
  deterministic updated_at values via direct DB update
- tests/test_models: add db_session.rollback() after every
  pytest.raises(IntegrityError) block
- tests/test_roundtrip: verify restored attachment content matches
  original in test_attachment_dedup_roundtrip

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
basilisk/gui/conversation_tab.py (1)

800-818: 🧹 Nitpick | 🔵 Trivial

Draft block appended/removed during save is clean, but not thread-safe.

The save_conversation method temporarily mutates self.conversation.messages (append on line 803, pop in finally on line 818). If any other code (e.g., a timer callback or UI event) reads self.conversation.messages between the append and pop, it would see an unexpected draft block. In a single-threaded wxPython app this is safe as long as no wx.Yield() or wx.SafeYield() is called during self.conversation.save(file_path). Just noting this as a design consideration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/gui/conversation_tab.py` around lines 800 - 818, save_conversation
temporarily mutates self.conversation.messages by append/pop which is not
thread-safe; protect this mutation by serializing it with a lock: add/use a
threading.Lock (e.g., self._messages_lock or a lock on the Conversation
instance) and acquire it before appending the draft_block, call
self.conversation.save(file_path) while holding the lock, then pop and release
the lock in finally; alternatively, avoid mutating shared state by building a
local copy (messages_copy = list(self.conversation.messages) + [draft_block])
and modify conversation.save to accept an explicit messages parameter or save
from that copy—reference functions/fields: save_conversation,
_build_draft_block, self.conversation.messages, and conversation.save.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/conversation/database/manager.py`:
- Around line 568-606: The subquery currently counts DBMessageBlock rows
(DBMessageBlock) and labels the result "message_count" which is misleading;
update the subquery to count actual DBMessage rows instead by joining DBMessage
to DBMessageBlock (counting DBMessage.id) or alternatively rename the label to
"block_count" if you intentionally want block counts; adjust the select
projection and output keys accordingly (references: DBMessageBlock, DBMessage,
DBConversation, the block_count subquery and the returned "message_count" field
in this query block).
- Around line 83-100: from_engine currently assigns instance._db_path a raw
string (engine.url.database) which can be ":memory:" for SQLite and is
type-inconsistent with the normal __init__ that uses a Path; change from_engine
to coerce the database value into a pathlib.Path (or None when
engine.url.database is None) so _db_path has the same type as in __init__;
update the from_engine method to set instance._db_path =
Path(engine.url.database) if engine.url.database is not None else None and add
the necessary import for Path, keeping the rest of the method (_engine,
_session_factory) unchanged.
- Around line 673-681: When constructing the MessageBlock, avoid calling
AIModelInfo(...) which triggers provider_must_exist validation; instead create
the model info via
AIModelInfo.model_construct(provider_id=db_block.model_provider,
model_id=db_block.model_id) (or wrap the current AIModelInfo(...) call in a
try/except catching ValidationError and falling back to
AIModelInfo.model_construct) so loading conversations from the DB does not fail
when a provider was uninstalled.
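The try/validate, fall-back-to-`model_construct` approach can be sketched with a stand-in model (the provider registry set and `model_info_from_db` helper below are hypothetical; the real `AIModelInfo` and its `provider_must_exist` validator live in basilisk):

```python
from pydantic import BaseModel, ValidationError, field_validator

_KNOWN_PROVIDERS = {"openai", "anthropic"}  # stand-in for the real registry

class AIModelInfo(BaseModel):
    provider_id: str
    model_id: str

    @field_validator("provider_id")
    @classmethod
    def provider_must_exist(cls, v: str) -> str:
        if v not in _KNOWN_PROVIDERS:
            raise ValueError(f"unknown provider: {v}")
        return v

def model_info_from_db(provider_id: str, model_id: str) -> AIModelInfo:
    """Validate when possible; skip validation for uninstalled providers."""
    try:
        return AIModelInfo(provider_id=provider_id, model_id=model_id)
    except ValidationError:
        # model_construct bypasses validators, so loading an old
        # conversation does not fail when its provider was removed
        return AIModelInfo.model_construct(
            provider_id=provider_id, model_id=model_id
        )
```

The trade-off: `model_construct` skips all validation, so the caller must treat the resulting object as unverified (e.g., for display only).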

In `@basilisk/conversation/database/models.py`:
- Around line 157-173: DBAttachment rows become orphaned when
DBMessageAttachment link rows are removed; add a cleanup routine (e.g.,
cleanup_orphan_attachments) that finds DBAttachment entries with no
DBMessageAttachment references (LEFT OUTER JOIN DBAttachment to
DBMessageAttachment and filter where DBMessageAttachment.id is NULL) and deletes
them safely in batches (or store removed blob_data elsewhere) and wire this
routine to a scheduler/management command (cron, Celery beat, or an admin
endpoint) to run periodically; ensure deletion runs inside a DB transaction and
handles large deletes via batching to avoid long locks.
- Around line 196-212: DBCitation is missing an index on message_id which will
cause slow lookups; update the DBCitation model to add an index on the
message_id column (either set mapped_column(..., index=True) for message_id or
add __table_args__ with Index("ix_citations_message_id", "message_id")) so
queries via the DBMessage.citations relationship use the index; ensure the index
name follows the project convention (e.g., ix_citations_message_id).

In `@basilisk/gui/conversation_history_dialog.py`:
- Around line 132-144: The _bind_events method mixes control.Bind and self.Bind;
change all event bindings to use self.Bind with the control as the source/id
parameter for consistency (e.g., replace search_ctrl.Bind(wx.EVT_TEXT,
self._on_search_text) with self.Bind(wx.EVT_TEXT, self._on_search_text,
self.search_ctrl), list_ctrl.Bind(...) with self.Bind(wx.EVT_LIST_ITEM_SELECTED,
self._on_item_selected, self.list_ctrl) and similarly for
EVT_LIST_ITEM_DESELECTED, EVT_LIST_ITEM_ACTIVATED, EVT_KEY_DOWN, and button
binds), and bind the timer with self.Bind(wx.EVT_TIMER, self._on_search_timer,
self._search_timer); update references in _bind_events to use self.Bind for
controls: search_ctrl, list_ctrl, open_btn, delete_btn, load_more_btn and the
timer _search_timer while keeping handler functions (_on_search_text,
_on_search_timer, _on_item_selected, _on_item_deselected, _on_open,
_on_list_key, _on_delete, _on_load_more) unchanged.

In `@basilisk/gui/conversation_tab.py`:
- Around line 1050-1065: The _build_draft_block currently constructs AIModelInfo
with fallback IDs like "unknown", which triggers AIModelInfo.provider_must_exist
validation when self.current_account or self.current_model are missing; update
_build_draft_block to return None early if either self.current_account or
self.current_model is falsy (do not construct AIModelInfo with fallback values),
and ensure any callers of _build_draft_block handle a None return appropriately.
- Around line 948-977: _auto_save_to_db can persist a None conversation.title on
first save because self.title (tab title) isn't synced; before calling
save_conversation when db_conv_id is None, ensure the model title is populated
by setting self.conversation.title = self.conversation.title or self.title (or
otherwise copy self.title into the conversation) or modify the call to
save_conversation to accept an explicit title argument (pass self.title) so the
DB record gets the user-facing title; update the logic inside _auto_save_to_db
(references: _auto_save_to_db, db_conv_id, save_conversation,
self.conversation.title, self.title) to perform this sync prior to persisting.

In `@tests/conversation/database/conftest.py`:
- Around line 67-94: Update the conversation_with_attachments fixture to guard
the Pillow import using pytest.importorskip so tests clearly skip when PIL isn't
installed: at the start of the fixture call pytest.importorskip("PIL") (or
pytest.importorskip("Pillow")) before using Image, then import Image from PIL
and proceed; ensure the guard is placed in the conversation_with_attachments
fixture (around the current PIL import and Image usage) so the test is skipped
with a clear message instead of raising ImportError.

In `@tests/conversation/database/test_manager.py`:
- Around line 352-362: The test_count_with_search function declares an unused
fixture parameter test_ai_model; remove the unused parameter from the test
signature (change def test_count_with_search(self, db_manager):) or
alternatively reference/use test_ai_model inside the test if it's needed for
setup; update any test runner references if required so the test no longer
receives an unused fixture called test_ai_model.
- Around line 206-227: The test_list_ordered_by_updated function declares an
unused fixture parameter test_ai_model; either remove test_ai_model from the
test signature or actually use it in the test. Locate the
test_list_ordered_by_updated definition and update the parameter list to drop
test_ai_model (or add a reference to test_ai_model inside the test body) so the
test no longer has an unused fixture; keep db_manager and any other fixtures
intact.

---

Outside diff comments:
In `@basilisk/gui/conversation_tab.py`:
- Around line 800-818: save_conversation temporarily mutates
self.conversation.messages by append/pop which is not thread-safe; protect this
mutation by serializing it with a lock: add/use a threading.Lock (e.g.,
self._messages_lock or a lock on the Conversation instance) and acquire it
before appending the draft_block, call self.conversation.save(file_path) while
holding the lock, then pop and release the lock in finally; alternatively, avoid
mutating shared state by building a local copy (messages_copy =
list(self.conversation.messages) + [draft_block]) and modify conversation.save
to accept an explicit messages parameter or save from that copy—reference
functions/fields: save_conversation, _build_draft_block,
self.conversation.messages, and conversation.save.
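The copy-based alternative can be sketched like this (class and method names are illustrative stand-ins, not the real ConversationTab API):

```python
import threading


class ConversationState:
    """Holds a shared messages list guarded by a lock."""

    def __init__(self):
        self.messages = []
        self._messages_lock = threading.Lock()

    def save_with_draft(self, draft_block, save):
        # Snapshot under the lock, then serialize outside it; the shared
        # list is never mutated, so a concurrent reader can never observe
        # the transient draft entry that an append/pop dance would create.
        with self._messages_lock:
            snapshot = list(self.messages)
        if draft_block is not None:
            snapshot.append(draft_block)
        save(snapshot)
```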

---

Duplicate comments:
In `@basilisk/conversation/database/manager.py`:
- Around line 64-66: The docstring for the static method get_db_engine is
missing a trailing period; update the docstring in the function
get_db_engine(db_path: Path) -> Engine to end with a period (e.g., change "Get
the sqlalchemy database engine" to "Get the sqlalchemy database engine.") so it
conforms to the ruff D415 rule.

In `@basilisk/gui/conversation_history_dialog.py`:
- Around line 235-243: The error path in _refresh_list causes self._offset to
remain incremented when called from _on_load_more, leading to skipped pages;
modify _refresh_list (or _on_load_more) so the offset is only advanced on a
successful fetch or is reverted on failure: either (a) move the increment of
self._offset (currently done in _on_load_more) to after
conv_db.list_conversations()/get_conversation_count() succeed inside
_refresh_list, or (b) keep the increment where it is but wrap the conv_db calls
in a try/except and decrement self._offset in the except block before returning;
update any related UI/error handling in _refresh_list to ensure consistent
state.
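Option (b) — revert on failure — can be sketched with a minimal pager; `fetch` stands in for `conv_db.list_conversations()`, and all names are illustrative:

```python
class HistoryPager:
    def __init__(self, fetch, page_size=20):
        self._fetch = fetch
        self._page_size = page_size
        self._offset = 0
        self.items = []

    def load_more(self) -> bool:
        self._offset += self._page_size
        try:
            page = self._fetch(limit=self._page_size, offset=self._offset)
        except Exception:
            # Roll the offset back so a failed fetch does not silently
            # skip a page when the user retries.
            self._offset -= self._page_size
            return False
        self.items.extend(page)
        return True
```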

Comment on lines +83 to +100
@classmethod
def from_engine(cls, engine: Engine) -> "ConversationDatabase":
"""Create a ConversationDatabase from an existing engine (no migrations).

This factory is intended for testing where the schema is created
directly via ``Base.metadata.create_all`` rather than Alembic.

Args:
engine: A pre-configured SQLAlchemy engine.

Returns:
A fully initialised ConversationDatabase instance.
"""
instance = cls.__new__(cls)
instance._db_path = engine.url.database
instance._engine = engine
instance._session_factory = sessionmaker(bind=engine)
return instance

🧹 Nitpick | 🔵 Trivial

from_engine: engine.url.database is None for in-memory SQLite.

For sqlite:///:memory: URLs, engine.url.database returns ":memory:" (a string), so this is fine for the test use case. Just noting that _db_path will be a string ":memory:" rather than a Path object, which is a type inconsistency with the normal __init__ path. This won't cause issues since _db_path isn't used after initialization in the from_engine path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/manager.py` around lines 83 - 100, from_engine
currently assigns instance._db_path a raw string (engine.url.database) which can
be ":memory:" for SQLite and is type-inconsistent with the normal __init__ that
uses a Path; change from_engine to coerce the database value into a pathlib.Path
(or None when engine.url.database is None) so _db_path has the same type as in
__init__; update the from_engine method to set instance._db_path =
Path(engine.url.database) if engine.url.database is not None else None and add
the necessary import for Path, keeping the rest of the method (_engine,
_session_factory) unchanged.
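The coercion itself is small; a sketch mirroring how `__init__` stores a `Path`:

```python
from pathlib import Path
from typing import Optional


def coerce_db_path(database: Optional[str]) -> Optional[Path]:
    """Coerce engine.url.database (a raw string or None) into a Path.

    For sqlite:///:memory: URLs the database component is the literal
    string ":memory:", which still round-trips through Path cleanly.
    """
    return Path(database) if database is not None else None
```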

Comment on lines +568 to +606
with self._get_session() as session:
block_count = (
select(
DBMessageBlock.conversation_id,
func.count(DBMessageBlock.id).label("message_count"),
)
.group_by(DBMessageBlock.conversation_id)
.subquery()
)

query = (
select(
DBConversation.id,
DBConversation.title,
func.coalesce(block_count.c.message_count, 0).label(
"message_count"
),
DBConversation.updated_at,
)
.outerjoin(
block_count,
DBConversation.id == block_count.c.conversation_id,
)
.order_by(DBConversation.updated_at.desc())
)

query = self._apply_search_filter(query, search)
query = query.limit(limit).offset(offset)
rows = session.execute(query).all()

return [
{
"id": row.id,
"title": row.title,
"message_count": row.message_count,
"updated_at": row.updated_at,
}
for row in rows
]

⚠️ Potential issue | 🟡 Minor

message_count is actually a block count, not a message count.

The subquery on lines 569-576 counts DBMessageBlock rows, but each block can contain up to 2 messages (user request + assistant response). The column is labeled "message_count" and displayed as "Messages" in the history dialog. This is misleading — it would be more accurate to label it "block_count" or to count actual DBMessage rows instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/manager.py` around lines 568 - 606, The
subquery currently counts DBMessageBlock rows (DBMessageBlock) and labels the
result "message_count" which is misleading; update the subquery to count actual
DBMessage rows instead by joining DBMessage to DBMessageBlock (counting
DBMessage.id) or alternatively rename the label to "block_count" if you
intentionally want block counts; adjust the select projection and output keys
accordingly (references: DBMessageBlock, DBMessage, DBConversation, the
block_count subquery and the returned "message_count" field in this query
block).
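The suggested fix — counting message rows by joining through blocks — can be sketched with stdlib sqlite3; the table and column names below are simplified stand-ins for the ORM models:

```python
import sqlite3

# One block holds up to two messages (request + response), so counting
# message rows through the block join yields the true message count.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE conversations (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE message_blocks (
        id INTEGER PRIMARY KEY,
        conversation_id INTEGER REFERENCES conversations(id)
    );
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        block_id INTEGER REFERENCES message_blocks(id)
    );
    INSERT INTO conversations VALUES (1, 'with messages'), (2, 'empty');
    INSERT INTO message_blocks VALUES (10, 1);
    -- one block holding two messages: the request and the response
    INSERT INTO messages VALUES (100, 10), (101, 10);
    """
)


def message_counts(conn):
    # LEFT JOINs keep conversations with no blocks; COUNT(m.id) then
    # yields 0 for them, matching the coalesce in the original query.
    return dict(
        conn.execute(
            """
            SELECT c.id, COUNT(m.id) AS message_count
            FROM conversations c
            LEFT JOIN message_blocks b ON b.conversation_id = c.id
            LEFT JOIN messages m ON m.block_id = b.id
            GROUP BY c.id
            """
        )
    )
```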

Comment on lines +673 to +681
block = MessageBlock(
request=request_msg,
response=response_msg,
system_index=system_index,
model=AIModelInfo(
provider_id=db_block.model_provider,
model_id=db_block.model_id,
),
temperature=db_block.temperature,

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the AIModelInfo model definition
find . -type f -name "*.py" | xargs rg -l "class AIModelInfo" | head -5

Repository: SigmaNight/basiliskLLM

Length of output: 97


🏁 Script executed:

#!/bin/bash
# Check the content around lines 673-681 in the target file
head -700 basilisk/conversation/database/manager.py | tail -50

Repository: SigmaNight/basiliskLLM

Length of output: 1493


🏁 Script executed:

#!/bin/bash
# Search for provider_must_exist validator
rg -n "provider_must_exist|provider_id.*validator" --type=py -C 3

Repository: SigmaNight/basiliskLLM

Length of output: 588


🏁 Script executed:

#!/bin/bash
# Get the full AIModelInfo class and validator implementation
cat -n basilisk/provider_ai_model.py | head -160 | tail -50

Repository: SigmaNight/basiliskLLM

Length of output: 1919


🏁 Script executed:

#!/bin/bash
# Check what exception the validator raises
rg -A 15 "def provider_must_exist" --type=py basilisk/provider_ai_model.py

Repository: SigmaNight/basiliskLLM

Length of output: 587


🏁 Script executed:

#!/bin/bash
# Check if there's error handling around line 673 (MessageBlock instantiation in manager.py)
cat -n basilisk/conversation/database/manager.py | sed -n '650,695p'

Repository: SigmaNight/basiliskLLM

Length of output: 1686


🏁 Script executed:

#!/bin/bash
# Check if Pydantic's model_construct is used anywhere in the codebase
rg -n "model_construct" --type=py -C 2

Repository: SigmaNight/basiliskLLM

Length of output: 473


🏁 Script executed:

#!/bin/bash
# Find the get_provider function to see what it does
rg -A 10 "^def get_provider" --type=py

Repository: SigmaNight/basiliskLLM

Length of output: 1114


🏁 Script executed:

#!/bin/bash
# Check if there's any try-except handling in load_conversation or its callers
rg -B 5 "load_conversation" --type=py | grep -A 5 "try\|except"

Repository: SigmaNight/basiliskLLM

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Check the full context of load_conversation to see if it's wrapped in try-except
cat -n basilisk/conversation/database/manager.py | sed -n '620,700p'

Repository: SigmaNight/basiliskLLM

Length of output: 2915


🏁 Script executed:

#!/bin/bash
# Check all usages of AIModelInfo instantiation to see if they have the same issue
rg -n "AIModelInfo(" --type=py -C 3

Repository: SigmaNight/basiliskLLM

Length of output: 140


🏁 Script executed:

#!/bin/bash
# Check the get_provider function completely to understand what error it raises
rg -A 20 "^def get_provider\(" --type=py basilisk/provider.py

Repository: SigmaNight/basiliskLLM

Length of output: 608


🏁 Script executed:

#!/bin/bash
# Check all usages of AIModelInfo instantiation with proper escaping
rg -n "AIModelInfo" --type=py -C 3

Repository: SigmaNight/basiliskLLM

Length of output: 9664


🏁 Script executed:

#!/bin/bash
# Check if load_conversation is wrapped in try-except at call sites
rg -B 3 -A 3 "\.load_conversation" --type=py

Repository: SigmaNight/basiliskLLM

Length of output: 17576


🏁 Script executed:

#!/bin/bash
# Check if there's a way to create AIModelInfo without validation (model_construct)
rg -n "class AIModelInfo" --type=py -A 20 basilisk/provider_ai_model.py

Repository: SigmaNight/basiliskLLM

Length of output: 834


Handle missing providers when loading conversations from the database.

When AIModelInfo(provider_id=db_block.model_provider, model_id=db_block.model_id) is instantiated at line 677, the provider_must_exist validator triggers and raises a ValidationError if the provider was uninstalled or removed since the conversation was saved. This makes the conversation unrecoverable without manual database intervention.

Since conversations are loaded from a trusted database source, use AIModelInfo.model_construct(provider_id=db_block.model_provider, model_id=db_block.model_id) to bypass validation, or add error handling to gracefully fall back when the provider is unavailable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/manager.py` around lines 673 - 681, When
constructing the MessageBlock, avoid calling AIModelInfo(...) which triggers
provider_must_exist validation; instead create the model info via
AIModelInfo.model_construct(provider_id=db_block.model_provider,
model_id=db_block.model_id) (or wrap the current AIModelInfo(...) call in a
try/except catching ValidationError and falling back to
AIModelInfo.model_construct) so loading conversations from the DB does not fail
when a provider was uninstalled.
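A stdlib analog of the suggested fallback (a stand-in class, not the real pydantic `AIModelInfo`; pydantic's `model_construct` similarly builds an instance without running validators):

```python
KNOWN_PROVIDERS = {"openai", "anthropic"}  # illustrative registry


class ModelInfoSketch:
    def __init__(self, provider_id: str, model_id: str):
        # Mirrors the provider_must_exist validator: strict on normal paths.
        if provider_id not in KNOWN_PROVIDERS:
            raise ValueError(f"No provider found with id {provider_id}")
        self.provider_id = provider_id
        self.model_id = model_id

    @classmethod
    def construct_unvalidated(cls, provider_id: str, model_id: str):
        # Bypass __init__ for trusted data loaded from the local database,
        # analogous to pydantic's Model.model_construct().
        obj = cls.__new__(cls)
        obj.provider_id = provider_id
        obj.model_id = model_id
        return obj


def load_model_info(provider_id: str, model_id: str) -> ModelInfoSketch:
    try:
        return ModelInfoSketch(provider_id, model_id)
    except ValueError:
        # The provider was uninstalled since the conversation was saved;
        # fall back so the conversation still loads.
        return ModelInfoSketch.construct_unvalidated(provider_id, model_id)
```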

Comment on lines +196 to +212
class DBCitation(Base):
"""Stores citation information for assistant messages."""

__tablename__ = "citations"

id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
message_id: Mapped[int] = mapped_column(
ForeignKey("messages.id", ondelete="CASCADE")
)
position: Mapped[int]
cited_text: Mapped[str | None] = mapped_column(default=None)
source_title: Mapped[str | None] = mapped_column(default=None)
source_url: Mapped[str | None] = mapped_column(default=None)
start_index: Mapped[int | None] = mapped_column(default=None)
end_index: Mapped[int | None] = mapped_column(default=None)

message: Mapped["DBMessage"] = relationship(back_populates="citations")

🧹 Nitpick | 🔵 Trivial

DBCitation table lacks an index on message_id.

Unlike DBMessage (which has ix_messages_block) and DBMessageAttachment (which has a unique constraint implying an index on message_id, position), DBCitation has no index on message_id. Queries that load citations by message (e.g., via the citations relationship on DBMessage) will require a full table scan on citations as the table grows.

Proposed fix
 	message: Mapped["DBMessage"] = relationship(back_populates="citations")
+
+	__table_args__ = (Index("ix_citations_message", "message_id"),)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/models.py` around lines 196 - 212, DBCitation
is missing an index on message_id which will cause slow lookups; update the
DBCitation model to add an index on the message_id column (either set
mapped_column(..., index=True) for message_id or add __table_args__ with
Index("ix_citations_message_id", "message_id")) so queries via the
DBMessage.citations relationship use the index; ensure the index name follows
the project convention (e.g., ix_citations_message_id).

Comment on lines 948 to 977

def _auto_save_to_db(self, new_block: MessageBlock):
"""Auto-save the conversation or new block to the database.

Args:
new_block: The newly completed message block to save.
"""
if not config.conf().conversation.auto_save_to_db:
return
if self.private:
return
try:
if self.db_conv_id is None:
self.db_conv_id = self._get_conv_db().save_conversation(
self.conversation
)
else:
block_index = self.conversation.messages.index(new_block)
system_msg = None
if new_block.system_index is not None:
system_msg = self.conversation.systems[
new_block.system_index
]
self._get_conv_db().save_message_block(
self.db_conv_id, block_index, new_block, system_msg
)
except Exception:
log.error(
"Failed to auto-save conversation to database", exc_info=True
)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check where conversation.title is set/synced with tab title
echo "=== Searching for conversation.title usage ==="
rg -n "conversation\.title" --type=py -g '**/conversation_tab.py' -A 2 -B 2

echo ""
echo "=== Searching for self.title usage in conversation_tab.py ==="
rg -n "self\.title" --type=py -g '**/conversation_tab.py' -A 2 -B 2 | head -80

Repository: SigmaNight/basiliskLLM

Length of output: 1274


🏁 Script executed:

#!/bin/bash
# Get file size to determine how to read it
echo "=== File size check ==="
wc -l basilisk/gui/conversation_tab.py

echo ""
echo "=== Searching for title property or assignment ==="
rg -n "title\s*=" --type=py -g '**/conversation_tab.py' | head -40

Repository: SigmaNight/basiliskLLM

Length of output: 792


🏁 Script executed:

#!/bin/bash
# Look for the MessageBlock and Conversation class to understand structure
echo "=== Searching for Conversation class definition ==="
fd "conversation.py" --type=py -x head -100 {}

echo ""
echo "=== Look for DBConversation or save_conversation ==="
rg -n "save_conversation|DBConversation" --type=py -C 3 | head -60

Repository: SigmaNight/basiliskLLM

Length of output: 4643


🏁 Script executed:

#!/bin/bash
# Search for Conversation class creation/initialization
echo "=== Looking for Conversation instantiation or title setting ==="
rg -n "Conversation(" --type=py -C 3 -g '**/conversation_tab.py' | head -60

echo ""
echo "=== Looking at lines around 137 where title is passed ==="
sed -n '125,145p' basilisk/gui/conversation_tab.py

Repository: SigmaNight/basiliskLLM

Length of output: 830


🏁 Script executed:

#!/bin/bash
# Search for where title might be updated (title generation, rename, etc.)
echo "=== Searching for title updates/changes ==="
rg -n "_update_title|rename|SetPageText" --type=py -C 2 -g '**/conversation_tab.py' | head -80

Repository: SigmaNight/basiliskLLM

Length of output: 109


🏁 Script executed:

#!/bin/bash
# Look for where conversation.title might be assigned
echo "=== Check Conversation class definition ==="
fd "conversation.py" -path "*basilisk*" -type f | head -5 | xargs -I {} head -200 {}

echo ""
echo "=== Search for title assignment in Conversation ==="
rg -n "self\.title\s*=" --type=py -g '**/conversation/**' -C 2 | head -60

Repository: SigmaNight/basiliskLLM

Length of output: 348


🏁 Script executed:

#!/bin/bash
# Find conversation.py without type flag
echo "=== Finding Conversation class ==="
find basilisk -name "conversation.py" -type f | head -5

echo ""
echo "=== Look at Conversation.open method ==="
rg -n "def open" basilisk -A 5 | head -40

Repository: SigmaNight/basiliskLLM

Length of output: 3141


🏁 Script executed:

#!/bin/bash
# Look at the entire _auto_save_to_db method context
echo "=== Full context of _auto_save_to_db ==="
sed -n '948,1000p' basilisk/gui/conversation_tab.py

Repository: SigmaNight/basiliskLLM

Length of output: 1516


🏁 Script executed:

#!/bin/bash
# Search for where self.conversation might be modified
echo "=== Search for self.conversation assignment/modification ==="
rg -n "self\.conversation\s*=" basilisk/gui/conversation_tab.py

echo ""
echo "=== Check if conversation object title is set anywhere ==="
rg -n "conversation\.title\s*=" basilisk --max-count=10

Repository: SigmaNight/basiliskLLM

Length of output: 327


🏁 Script executed:

#!/bin/bash
# Look at ConversationTab.__init__ to see full initialization
echo "=== ConversationTab.__init__ method ==="
sed -n '160,220p' basilisk/gui/conversation_tab.py

Repository: SigmaNight/basiliskLLM

Length of output: 2321


🏁 Script executed:

#!/bin/bash
# Look for where update_db_title is called
echo "=== Where is update_db_title called? ==="
rg -n "update_db_title" basilisk --max-count=15

echo ""
echo "=== Search for title property setter ==="
rg -n "@.*\.setter" basilisk/gui/conversation_tab.py -A 3

Repository: SigmaNight/basiliskLLM

Length of output: 312


🏁 Script executed:

#!/bin/bash
# Check Conversation class constructor
echo "=== Conversation class __init__ ==="
sed -n '1,150p' basilisk/conversation/conversation_model.py | head -100

Repository: SigmaNight/basiliskLLM

Length of output: 3010


🏁 Script executed:

#!/bin/bash
# Find the Conversation class and its title field
echo "=== Conversation class definition with title field ==="
rg -n "class Conversation" basilisk/conversation/conversation_model.py -A 50 | head -80

echo ""
echo "=== Search for title field in Conversation ==="
rg -n "title.*:" basilisk/conversation/conversation_model.py | head -20

Repository: SigmaNight/basiliskLLM

Length of output: 1969


🏁 Script executed:

#!/bin/bash
# Check when conversation.title gets set for newly created conversations
echo "=== Search for where new conversations get their title ==="
rg -n "\.title = " basilisk/gui/conversation_tab.py -B 3 -A 1

echo ""
echo "=== Check new conversation creation flow ==="
sed -n '1150,1190p' basilisk/gui/main_frame.py

Repository: SigmaNight/basiliskLLM

Length of output: 1709


🏁 Script executed:

#!/bin/bash
# Look at what happens when a new blank conversation is created
echo "=== Check how new conversations are created ==="
rg -n "def new_conversation|Conversation\(" basilisk/gui/conversation_tab.py -A 5 | head -60

Repository: SigmaNight/basiliskLLM

Length of output: 362


_auto_save_to_db: conversation title may not be persisted on first save if not explicitly synced.

When db_conv_id is None, the entire conversation is saved via save_conversation, which persists the conversation.title field. However, for newly created conversations, self.conversation.title remains None while self.title contains the user-facing tab title. If auto-save is triggered before conversation.title is explicitly set (e.g., via renaming or other title updates), the database record will persist with a None title, even though the tab displays a title to the user.

Ensure self.conversation.title is synchronized with self.title during initialization or before the first auto-save, or explicitly pass self.title when calling save_conversation().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/gui/conversation_tab.py` around lines 948 - 977, _auto_save_to_db
can persist a None conversation.title on first save because self.title (tab
title) isn't synced; before calling save_conversation when db_conv_id is None,
ensure the model title is populated by setting self.conversation.title =
self.conversation.title or self.title (or otherwise copy self.title into the
conversation) or modify the call to save_conversation to accept an explicit
title argument (pass self.title) so the DB record gets the user-facing title;
update the logic inside _auto_save_to_db (references: _auto_save_to_db,
db_conv_id, save_conversation, self.conversation.title, self.title) to perform
this sync prior to persisting.
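The sync itself reduces to a small helper; names here mirror the review's references but are simplified stand-ins:

```python
def resolve_persist_title(conversation_title, tab_title):
    """Title to persist on first save: the model's own title if set,
    otherwise the user-facing tab title."""
    return conversation_title if conversation_title else tab_title
```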

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/settings.local.json:
- Around line 1-7: The committed machine-local config
.claude/settings.local.json should be ignored and removed from the PR: add a
.claude/settings.local.json entry to your repository .gitignore (or add the
entire .claude/ pattern) and stop tracking the file by removing it from Git
index (e.g., git rm --cached .claude/settings.local.json) then commit the
.gitignore update and the removal so the file is not included in history going
forward; ensure you do not delete the local file from disk so your local
settings remain.

In `@pyproject.toml`:
- Line 35: The pyproject.toml currently pins SQLAlchemy to
"sqlalchemy>=2.0,<2.1", which will block the imminent stable 2.1 release and may
break installs because SQLAlchemy 2.1 makes greenlet optional; update the
dependency constraint in pyproject.toml to allow 2.1 (for example
"sqlalchemy>=2.0,<2.2" or "sqlalchemy>=2.0") and then adjust any
packaging/config that forces greenlet into zip_include_packages (remove greenlet
from zip_include_packages or make its inclusion conditional/detected at
build/runtime) so installs won’t fail when SQLAlchemy no longer bundles
greenlet; verify by running your build/install pipeline and any async test
matrix to ensure no runtime import errors.

Comment on lines 1 to 7
{
"permissions": {
"allow": [
"Bash(uv run:*)"
]
}
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if .claude/settings.local.json is already covered by .gitignore

# Check .gitignore for matching patterns
echo "=== Checking .gitignore for matches ==="
grep -rn "settings.local\|\.local\.json\|\.claude\|^\.claude/" .gitignore 2>/dev/null || echo "No matching gitignore entry found"

# Check if .claude directory has any gitignore entries
echo ""
echo "=== Checking if .claude/.gitignore exists ==="
if [ -f .claude/.gitignore ]; then
  cat .claude/.gitignore
else
  echo ".claude/.gitignore does not exist"
fi

# List all .gitignore files in the repo
echo ""
echo "=== All .gitignore files in repo ==="
find . -name ".gitignore" -type f

# Check if .claude/settings.local.json exists in the repo
echo ""
echo "=== Check if .claude/settings.local.json exists ==="
if [ -f .claude/settings.local.json ]; then
  echo "File exists"
  cat .claude/settings.local.json
else
  echo "File does not exist"
fi

# Check for other .local files to understand the pattern
echo ""
echo "=== Other .local files in repo ==="
find . -name "*.local.json" -o -name "*.local.*" | head -20

Repository: SigmaNight/basiliskLLM

Length of output: 473


Accidental commit of a local developer config — should be gitignored.

.claude/settings.local.json is a machine-local Claude AI permissions override. The .local suffix follows the standard convention for non-shared configs (analogous to .env.local). Committing it adds noise to version control and may cause conflicts for other contributors with different local settings.

Add it to .gitignore and remove it from the PR:

# .gitignore
+.claude/settings.local.json
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/settings.local.json around lines 1 - 7, The committed machine-local
config .claude/settings.local.json should be ignored and removed from the PR:
add a .claude/settings.local.json entry to your repository .gitignore (or add
the entire .claude/ pattern) and stop tracking the file by removing it from Git
index (e.g., git rm --cached .claude/settings.local.json) then commit the
.gitignore update and the removal so the file is not included in history going
forward; ensure you do not delete the local file from disk so your local
settings remain.

"pywin32 (>=311,<312) ; sys_platform == 'win32'",
"pyyaml (>=6.0.3,<6.1)",
"sounddevice (>=0.5.3,<0.6)",
"sqlalchemy>=2.0,<2.1",

🧹 Nitpick | 🔵 Trivial

Plan for the upcoming SQLAlchemy 2.1 stable release.

2.1.0b1 was released January 21, 2026, meaning the stable 2.1 release is imminent. The <2.1 upper bound will correctly exclude beta but will also block stable 2.1, requiring a constraint update at that point. SQLAlchemy 2.1 makes greenlet a fully optional dependency (asyncio-only), no longer installed by default, which may affect the zip_include_packages entry for greenlet at that upgrade. Worth noting the planned migration effort.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 35, The pyproject.toml currently pins SQLAlchemy to
"sqlalchemy>=2.0,<2.1", which will block the imminent stable 2.1 release and may
break installs because SQLAlchemy 2.1 makes greenlet optional; update the
dependency constraint in pyproject.toml to allow 2.1 (for example
"sqlalchemy>=2.0,<2.2" or "sqlalchemy>=2.0") and then adjust any
packaging/config that forces greenlet into zip_include_packages (remove greenlet
from zip_include_packages or make its inclusion conditional/detected at
build/runtime) so installs won’t fail when SQLAlchemy no longer bundles
greenlet; verify by running your build/install pipeline and any async test
matrix to ensure no runtime import errors.

clementb49 and others added 2 commits February 21, 2026 16:42
DBAttachment uses content-hash deduplication and has no cascade delete
from DBMessageAttachment, so blob rows can accumulate as orphans when
conversations or draft blocks are deleted.

Add ConversationDatabase.cleanup_orphan_attachments() that identifies
unreferenced rows via a LEFT JOIN on DBMessageAttachment and removes them
in a single transaction. Wire it to:
- __init__ (once at startup, to sweep orphans from previous sessions)
- delete_conversation (after each cascade delete commits)

Add TestCleanupOrphanAttachments covering: no-orphan path, full cleanup
after a conversation delete, and shared-attachment survival when only one
of two referencing conversations is removed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ersation tests

- Add db_message_block fixture to database conftest; drop redundant
  created_at/updated_at kwargs from all DBConversation/DBMessageBlock
  constructions in test_models.py
- Replace _make_block, _make_message_and_attachment, _make_message class
  helpers with the shared db_message_block fixture
- Lift local imports and add _count_attachments helper in test_manager.py
- Add tests/conversation/conftest.py with shared bskc_path and
  storage_path fixtures; remove duplicate definitions from 3 test classes
- Replace 6 numbered fixtures with make_user_block/make_system factories
  in TestConversationWithMultipleBlocks
- Move import io and from PIL import Image to module level in
  test_migration.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/conversation/test_migration.py (1)

367-370: ⚠️ Potential issue | 🟡 Minor

storage_path fixture shares a global fsspec MemoryFileSystem with no teardown.

UPath("memory://test_migration") resolves to fsspec's process-global MemoryFileSystem singleton. Because the fixture has no yield/cleanup, attachment files written by test_open_bskc_v2_file_with_attachments persist for the lifetime of the pytest session. While the current test suite is unlikely to cross-contaminate (only one test writes to this path), the isolation guarantee is fragile.

🛡️ Proposed fix: clear the memory path after each test
+import fsspec
+
 @pytest.fixture
 def storage_path(self):
     """Return a test storage path."""
-    return UPath("memory://test_migration")
+    path = UPath("memory://test_migration")
+    yield path
+    # Clean up the global MemoryFileSystem to avoid cross-test contamination.
+    fs = fsspec.filesystem("memory")
+    if fs.exists("/test_migration"):
+        fs.rm("/test_migration", recursive=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/conversation/test_migration.py` around lines 367 - 370, The
storage_path fixture currently returns UPath("memory://test_migration") which
uses fsspec's global MemoryFileSystem and lacks teardown, so files persist
across tests; change the storage_path fixture to a yield-style fixture that
yields the UPath and then clears the in-memory path in the teardown (e.g., call
the underlying filesystem's remove/rm for the "test_migration" directory via
storage_path.fs.rm(..., recursive=True) or storage_path.rmdir(...) as
appropriate) to ensure test_open_bskc_v2_file_with_attachments and other tests
cannot cross-contaminate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/conversation/database/manager.py`:
- Around line 815-836: The local import "from upath import UPath" inside
_load_attachment should be moved to the module's top-level imports to follow
conventional import placement and avoid repeated runtime imports; update the
module import block to include UPath alongside existing imports (used by
_load_attachment and referenced types AttachmentFile/ImageFile) and remove the
in-function import so _load_attachment uses the top-level UPath.
- Around line 244-256: The save path is dropping BSKC/provider citation keys
because ConversationDatabaseManager (manager.py) expects DBCitation-style keys;
add a normalization step on the Message model that runs before persistence
(e.g., Message.normalize_citations()) which maps BSKC/provider keys to the DB
schema (map "text"->"cited_text", "source"->"source_title", "url"->"source_url",
and normalize any provider-specific dict keys produced by
Anthropic._handle_citation) and ensures citations are plain dicts with keys
cited_text, source_title, source_url, start_index, end_index; call
Message.normalize_citations() from the save flow right before the loop that
creates DBCitation (the block that constructs DBCitation with
message_id=db_msg.id...), so persistence consumes normalized keys and no
citation data is lost.

In `@tests/conversation/database/conftest.py`:
- Around line 68-71: The new fixture test_ai_model duplicates the existing
ai_model fixture; remove the test_ai_model definition and update tests that
import or request test_ai_model to use the existing ai_model fixture instead
(search for references to test_ai_model and replace with ai_model), ensuring
there are no name collisions and that AIModelInfo(provider_id="openai",
model_id="test_model") usage remains unchanged; run tests to confirm no missing
fixtures.

In `@tests/conversation/test_model.py`:
- Around line 161-168: The make_system fixture passes
role=MessageRoleEnum.SYSTEM to SystemMessage redundantly; remove that argument
in the inner factory (_make) so it simply returns SystemMessage(content=content)
and rely on SystemMessage's default role field (MessageRoleEnum.SYSTEM),
updating the _make function in the make_system fixture accordingly.
- Around line 149-168: Add explicit type annotations to the fixture functions
and their inner helpers: annotate make_user_block(return type Callable[[str],
MessageBlock]) and its inner _make(parameter content: str) -> MessageBlock,
referencing MessageBlock, Message, ai_model, and MessageRoleEnum.USER; annotate
make_system(return type Callable[[str], SystemMessage]) and its inner
_make(parameter content: str) -> SystemMessage, referencing SystemMessage and
MessageRoleEnum.SYSTEM. Ensure imports/types are available or imported and use
from typing import Callable where needed so both outer fixtures and inner _make
helpers have proper parameter and return type hints.

---


Duplicate comments:
In `@basilisk/conversation/database/manager.py`:
- Around line 715-722: The AIModelInfo(...) constructor triggers
provider_must_exist validation and can raise ValidationError when a provider was
uninstalled, making conversations unrecoverable; update the MessageBlock
construction to build AIModelInfo from trusted DB data using
AIModelInfo.model_construct(...) to bypass validators, or catch
pydantic.ValidationError around AIModelInfo(...) and fall back to
AIModelInfo.model_construct(...) (or a safe default) so loading a conversation
does not fail; refer to MessageBlock creation and AIModelInfo, handle
ValidationError accordingly.
- Around line 610-648: The query currently aggregates DBMessageBlock rows and
labels the result "message_count" which is misleading because each
DBMessageBlock can contain up to two messages; update the implementation in the
method using _get_session/_apply_search_filter and DBConversation to either (A)
change the returned dict key from "message_count" to "block_count" (and update
any consumers) or (B) modify the aggregation to count DBMessage rows instead
(join DBMessage on DBMessage.conversation_id and count DBMessage.id, label it
"message_count"); ensure the selected column label and the returned dict key
match (row.block_count or row.message_count) and adjust ordering/joins
accordingly.

In `@tests/conversation/database/test_manager.py`:
- Around line 208-229: The test function test_list_ordered_by_updated has an
unused fixture parameter test_ai_model (and similarly test_count_with_search
elsewhere); remove the unused fixture from the function signature (or rename it
to _test_ai_model to explicitly mark it unused) so pytest no longer warns about
an unused fixture; update the function signature for
test_list_ordered_by_updated (and test_count_with_search) to omit or underscore
the test_ai_model parameter and run tests to ensure no other references exist.
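Option B from the block/message count note above can be sketched with plain sqlite3 (throwaway table and column names; the real code uses SQLAlchemy models):

```python
# Count DBMessage rows per conversation rather than DBMessageBlock rows, so
# the "message_count" label matches what is actually counted.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conversation (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE message (id INTEGER PRIMARY KEY, conversation_id INTEGER);
INSERT INTO conversation VALUES (1, 'demo');
-- one block with request+response = two messages, one block with request only
INSERT INTO message (conversation_id) VALUES (1), (1), (1);
""")
row = conn.execute(
    "SELECT c.id, COUNT(m.id) AS message_count "
    "FROM conversation c LEFT JOIN message m ON m.conversation_id = c.id "
    "GROUP BY c.id"
).fetchone()
print(row)  # three messages counted, not two blocks
```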

Comment on lines +244 to +256
# Save citations
if message.citations:
for pos, citation in enumerate(message.citations):
db_citation = DBCitation(
message_id=db_msg.id,
position=pos,
cited_text=citation.get("cited_text"),
source_title=citation.get("source_title"),
source_url=citation.get("source_url"),
start_index=citation.get("start_index"),
end_index=citation.get("end_index"),
)
session.add(db_citation)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check citation keys in test files
rg -n '"text"|"source"|"page"|"url"|"cited_text"|"source_title"|"source_url"' --type=py -g '*test*' -B2 -A2 | head -100

Repository: SigmaNight/basiliskLLM

Length of output: 5452


🏁 Script executed:

# Find test_save_restore.py and examine citation formats
fd test_save_restore.py -x cat {} -n | head -500

Repository: SigmaNight/basiliskLLM

Length of output: 16718


🏁 Script executed:

# Check manager.py for citation handling
fd manager.py basilisk/conversation -x cat {} -n | sed -n '244,256p'

Repository: SigmaNight/basiliskLLM

Length of output: 588


🏁 Script executed:

# Search for citation definitions in provider_engine
rg -n 'citation' --type=py basilisk/provider_engine -B2 -A2 | head -150

Repository: SigmaNight/basiliskLLM

Length of output: 3695


🏁 Script executed:

# Check Message model definition
rg -n 'class Message|citations.*:' --type=py basilisk/conversation -B2 -A5 | head -100

Repository: SigmaNight/basiliskLLM

Length of output: 4400


🏁 Script executed:

# Check what keys Anthropic citation objects have
rg -n "type.*page_location\|char_location" --type=py -B3 -A10 basilisk/provider_engine/anthropic_engine.py

Repository: SigmaNight/basiliskLLM

Length of output: 48


🏁 Script executed:

# Search for any citation key normalization or mapping code
rg -n "cited_text.*citation|citation.*cited_text|text.*source.*citation" --type=py -B2 -A2 | head -80

Repository: SigmaNight/basiliskLLM

Length of output: 2971


🏁 Script executed:

# Check if there's a BSKC to DB migration path
rg -n "\.open\(|Conversation\.open" --type=py -B2 -A5 | grep -A5 "\.bskc"

Repository: SigmaNight/basiliskLLM

Length of output: 572


🏁 Script executed:

# Look for any citation normalization in conversation models
fd conversation_model.py -x cat {} -n | grep -A30 "citations"

Repository: SigmaNight/basiliskLLM

Length of output: 1286


🏁 Script executed:

# Check if test_save_restore citations are ever used with DB manager
rg -n "test_save_conversation_with_citations|test.*citations.*db|db.*citations.*test" --type=py -B3 -A3

Repository: SigmaNight/basiliskLLM

Length of output: 1188


🏁 Script executed:

# Check the Anthropic citation handling more carefully
cat -n basilisk/provider_engine/anthropic_engine.py | sed -n '398,412p'

Repository: SigmaNight/basiliskLLM

Length of output: 605


🏁 Script executed:

# Look for any citation normalization/conversion code in the codebase
rg -n "citation.*\[|dict\(citation\)" --type=py -B3 -A3

Repository: SigmaNight/basiliskLLM

Length of output: 11432


🏁 Script executed:

# Check if there are any tests that load from BSKC and save to DB
rg -n "Conversation\.open.*\.bskc.*db_manager\|db_manager.*Conversation\.open" --type=py -B2 -A5

Repository: SigmaNight/basiliskLLM

Length of output: 48


🏁 Script executed:

# Verify the BSKC format schema by checking how citations are saved in BSKC
rg -n "def save|citations" --type=py basilisk/conversation -B1 -A10 | grep -A10 "def save"

Repository: SigmaNight/basiliskLLM

Length of output: 4345


🏁 Script executed:

# Check if there's any test mixing BSKC and DB usage
fd "test_*.py" basilisk/conversation -x grep -l "bskc_path.*db_manager\|db_manager.*bskc_path" {}

Repository: SigmaNight/basiliskLLM

Length of output: 48


Citation key formats are incompatible between BSKC and database persistence layers, causing data loss.

The BSKC format uses citation keys "text", "source", "page", "url" (verified in test_save_restore.py lines 392–433), while the DB layer extracts only cited_text, source_title, source_url, start_index, and end_index (lines 244–256). When a conversation loaded from .bskc is saved to the DB, citation.get("cited_text") returns None for BSKC-formatted citations, silently dropping all citation data.

Additionally, the Anthropic provider's _handle_citation method converts citation objects via dict(citation), which produces provider-specific keys that will also be lost during DB storage.

Implement one of these solutions:

  1. Normalize citation keys at the Message model level before any persistence operation, or
  2. Store the full citation dict as JSON in the DB to avoid field loss.
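A minimal sketch of option 1 follows; the source keys come from the BSKC format quoted above and the target keys from the DBCitation columns, but the function name and the decision to drop unmapped keys (e.g. BSKC's "page", which has no DB column here) are assumptions:

```python
# Hedged sketch: remap BSKC-style citation keys onto the DBCitation schema
# before persistence, so citation.get("cited_text") no longer returns None
# for BSKC-formatted citations.
_DB_KEYS = ("cited_text", "source_title", "source_url", "start_index", "end_index")
_KEY_MAP = {"text": "cited_text", "source": "source_title", "url": "source_url"}

def normalize_citation(citation: dict) -> dict:
    """Return a dict restricted to DBCitation columns, remapping BSKC keys."""
    remapped = {_KEY_MAP.get(k, k): v for k, v in citation.items()}
    return {k: remapped.get(k) for k in _DB_KEYS}

bskc = {"text": "quoted passage", "source": "Doc A", "url": "https://example.org"}
print(normalize_citation(bskc))
# cited_text/source_title/source_url filled; start_index/end_index left as None
```

Citations already using DB-style keys pass through unchanged, so the step is safe to run unconditionally before the DBCitation loop.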

Comment on lines +815 to +836
def _load_attachment(
self, db_att: DBAttachment, description: str | None
) -> AttachmentFile | ImageFile | None:
"""Convert a DB attachment to a Pydantic attachment."""
from upath import UPath

if db_att.location_type == AttachmentFileTypes.URL.value:
return self._make_attachment(db_att, UPath(db_att.url), description)

# BLOB attachment - write to memory filesystem
if db_att.blob_data is None:
log.warning("Attachment %s has no blob data", db_att.name)
return None

mem_path = UPath(
f"memory://db_attachment_{db_att.id}/{db_att.name or 'file'}"
)
mem_path.parent.mkdir(parents=True, exist_ok=True)
with mem_path.open("wb") as f:
f.write(db_att.blob_data)

return self._make_attachment(db_att, mem_path, description)

🧹 Nitpick | 🔵 Trivial

Consider moving from upath import UPath to the top-level imports.

The deferred import on line 819 works, but UPath is a lightweight import and the module already depends on upath transitively (via AttachmentFile). Top-level placement is more conventional.


Comment on lines +68 to +71
@pytest.fixture
def test_ai_model():
"""Return a test AI model info."""
return AIModelInfo(provider_id="openai", model_id="test_model")

🧹 Nitpick | 🔵 Trivial

Duplicate AIModelInfo fixture under a different name.

tests/conftest.py already defines an ai_model fixture with the same AIModelInfo(provider_id="openai", model_id="test_model"). Consider reusing it instead of defining test_ai_model here to reduce duplication.


Comment on lines 149 to +168
Removed (per-content fixtures):

    @pytest.fixture
    def first_block(self, ai_model):
        """Return a first message block."""
        req_msg = Message(role=MessageRoleEnum.USER, content="First message")
        return MessageBlock(request=req_msg, model=ai_model)

    @pytest.fixture
    def second_block(self, ai_model):
        """Return a second message block."""
        req_msg = Message(role=MessageRoleEnum.USER, content="Second message")
        return MessageBlock(request=req_msg, model=ai_model)

    @pytest.fixture
    def third_block(self, ai_model):
        """Return a third message block."""
        req_msg = Message(role=MessageRoleEnum.USER, content="Third message")
        return MessageBlock(request=req_msg, model=ai_model)

    @pytest.fixture
    def first_system(self):
        """Return a first system message."""
        return SystemMessage(
            role=MessageRoleEnum.SYSTEM, content="First system instructions"
        )

    @pytest.fixture
    def second_system(self):
        """Return a second system message."""
        return SystemMessage(
            role=MessageRoleEnum.SYSTEM, content="Second system instructions"
        )

    @pytest.fixture
    def third_system(self):
        """Return a third system message."""
        return SystemMessage(
            role=MessageRoleEnum.SYSTEM, content="Third system instructions"
        )

Added (factory fixtures):

    @pytest.fixture
    def make_user_block(self, ai_model):
        """Factory for creating user message blocks with given content."""

        def _make(content):
            return MessageBlock(
                request=Message(role=MessageRoleEnum.USER, content=content),
                model=ai_model,
            )

        return _make

    @pytest.fixture
    def make_system(self):
        """Factory for creating system messages with given content."""

        def _make(content):
            return SystemMessage(role=MessageRoleEnum.SYSTEM, content=content)

        return _make

🧹 Nitpick | 🔵 Trivial

Factory fixtures lack type annotations on the inner _make helpers.

Per the project's coding guidelines, type hints are expected for all functions. The outer fixture functions (make_user_block, make_system) and the inner _make callables are both missing parameter/return annotations.

♻️ Proposed type-annotated form

     @pytest.fixture
     def make_user_block(self, ai_model):
         """Factory for creating user message blocks with given content."""

    -    def _make(content):
    +    def _make(content: str) -> MessageBlock:
             return MessageBlock(
                 request=Message(role=MessageRoleEnum.USER, content=content),
                 model=ai_model,
             )

         return _make


     @pytest.fixture
     def make_system(self):
         """Factory for creating system messages with given content."""

    -    def _make(content):
    -        return SystemMessage(role=MessageRoleEnum.SYSTEM, content=content)
    +    def _make(content: str) -> SystemMessage:
    +        return SystemMessage(content=content)

         return _make

Based on coding guidelines: "Use type hints for all public classes and functions".


Comment on lines 161 to +168
Removed (per-content fixtures):

    @pytest.fixture
    def first_system(self):
        """Return a first system message."""
        return SystemMessage(
            role=MessageRoleEnum.SYSTEM, content="First system instructions"
        )

    @pytest.fixture
    def second_system(self):
        """Return a second system message."""
        return SystemMessage(
            role=MessageRoleEnum.SYSTEM, content="Second system instructions"
        )

    @pytest.fixture
    def third_system(self):
        """Return a third system message."""
        return SystemMessage(
            role=MessageRoleEnum.SYSTEM, content="Third system instructions"
        )

Added (factory fixture):

    @pytest.fixture
    def make_system(self):
        """Factory for creating system messages with given content."""

        def _make(content):
            return SystemMessage(role=MessageRoleEnum.SYSTEM, content=content)

        return _make

🧹 Nitpick | 🔵 Trivial

role=MessageRoleEnum.SYSTEM is redundant in make_system.

SystemMessage already declares role: MessageRoleEnum = Field(default=MessageRoleEnum.SYSTEM), so passing it explicitly adds no value and creates a minor maintenance burden if the default ever changes.

♻️ Proposed simplification
 def _make(content):
-    return SystemMessage(role=MessageRoleEnum.SYSTEM, content=content)
+    return SystemMessage(content=content)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    @pytest.fixture
    def make_system(self):
        """Factory for creating system messages with given content."""

        def _make(content):
            return SystemMessage(content=content)

        return _make


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/gui/conversation_tab.py`:
- Around line 1176-1180: Move the assignment to tab.db_conv_id so it happens
after restoring the draft to avoid triggering the draft timer: call
tab._restore_draft_block(draft_block) before setting tab.db_conv_id; this
prevents prompt_panel.prompt_text changes from firing EVT_TEXT which makes
_on_prompt_text_changed see a non-None db_conv_id and start _draft_timer
unnecessarily.
- Line 242: Replace the direct child binding
self.prompt_panel.prompt.Bind(wx.EVT_TEXT, self._on_prompt_text_changed) with
the preferred pattern using the parent binder: call self.Bind(wx.EVT_TEXT,
self._on_prompt_text_changed, self.prompt_panel.prompt) so the wx.CommandEvent
(wx.EVT_TEXT) is bound on the parent while specifying the source widget
(self.prompt_panel.prompt), keeping event propagation and consistency with other
handlers in this class.

---

Duplicate comments:
In `@basilisk/gui/conversation_tab.py`:
- Around line 959-963: The first auto-save persists a conversation with
conversation.title == None; before saving in _auto_save_to_db, ensure the model
has a sensible title by setting self.conversation.title = self.title (or calling
update_db_title) when db_conv_id is None and title is None so
save_conversation(self.conversation) writes the visible tab title; update
references: _auto_save_to_db, save_conversation, update_db_title,
self.conversation.title, self.title, and db_conv_id.
- Around line 1107-1118: _restore_draft_block currently doesn't restore the
system prompt when apply_profile(None, True) returned early; fix by explicitly
restoring system_prompt_txt from the draft_block's system_index inside
_restore_draft_block: if draft_block.system_index is not None, lookup the
corresponding system prompt in the UI's system prompts collection (e.g., the
list or combo used to populate system prompts such as self.system_prompts or the
system prompt combo control) and set self.system_prompt_txt (or select the combo
index) to that value so the original system message is re-applied even when no
profile was loaded; ensure you reference draft_block.system_index and the UI
control (system_prompt_txt / system prompt combo/list) in the change.

self, self.conv_storage_path, self.on_submit
)
sizer.Add(self.prompt_panel, proportion=1, flag=wx.EXPAND)
self.prompt_panel.prompt.Bind(wx.EVT_TEXT, self._on_prompt_text_changed)

🧹 Nitpick | 🔵 Trivial

Use self.Bind() for event binding per coding guidelines.

self.prompt_panel.prompt.Bind(wx.EVT_TEXT, ...) binds directly on the child widget. Since wx.EVT_TEXT is a wx.CommandEvent derivative that propagates up the hierarchy, the preferred form self.Bind(wx.EVT_TEXT, self._on_prompt_text_changed, self.prompt_panel.prompt) is both correct and consistent with the rest of the file.

♻️ Proposed fix
-		self.prompt_panel.prompt.Bind(wx.EVT_TEXT, self._on_prompt_text_changed)
+		self.Bind(wx.EVT_TEXT, self._on_prompt_text_changed, self.prompt_panel.prompt)

As per coding guidelines: "In wxPython UI code, bind events using self.Bind(wx.EVT_*, self.on_*) consistently."

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
self.prompt_panel.prompt.Bind(wx.EVT_TEXT, self._on_prompt_text_changed)
self.Bind(wx.EVT_TEXT, self._on_prompt_text_changed, self.prompt_panel.prompt)

Comment on lines +1176 to +1180
tab.db_conv_id = conv_id

if draft_block is not None:
tab._restore_draft_block(draft_block)


🧹 Nitpick | 🔵 Trivial

Setting db_conv_id before _restore_draft_block triggers an unnecessary draft timer.

When _restore_draft_block sets prompt_panel.prompt_text, EVT_TEXT fires → _on_prompt_text_changed sees a non-None db_conv_id → starts _draft_timer → re-saves the same draft 2 s after open. Swapping the assignment order prevents this redundant round-trip.

♻️ Proposed fix
-		tab.db_conv_id = conv_id
-
 		if draft_block is not None:
 			tab._restore_draft_block(draft_block)

+		tab.db_conv_id = conv_id
 		return tab
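The ordering issue can be seen in a tiny debounce sketch: text events restart a short timer, and edits made before `db_conv_id` is assigned never schedule a save. Class, attribute names, and timing values here are hypothetical, not the app's real API:

```python
import threading

class DraftSaver:
    def __init__(self, delay: float) -> None:
        self.delay = delay
        self.db_conv_id = None
        self.saves = []
        self._timer = None

    def on_text_changed(self, text: str) -> None:
        if self.db_conv_id is None:
            return  # draft restore before id assignment: no timer started
        if self._timer is not None:
            self._timer.cancel()  # restart the debounce window
        self._timer = threading.Timer(self.delay, self.saves.append, args=(text,))
        self._timer.start()

saver = DraftSaver(delay=0.01)
saver.on_text_changed("restored draft")  # id not yet set -> ignored, no re-save
saver.db_conv_id = 42
saver.on_text_changed("user edit")       # id set -> save scheduled
saver._timer.join()                      # wait for the debounced save to fire
assert saver.saves == ["user edit"]
```

Restoring the draft before assigning the conversation id (as the proposed fix does) means the first branch swallows the restore-time event, avoiding the redundant round-trip.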

@AAClause AAClause merged commit 5873182 into master Feb 23, 2026
11 checks passed
@AAClause AAClause deleted the convDatabase branch February 23, 2026 07:27