feat(ai): add tool-calling agent (list_chats, get_user_info, read_chat_messages) #22

Open

belyakova-anna wants to merge 1 commit into dev from ai-agent-tools

Conversation

@belyakova-anna

Summary

Adds an AI tool-calling agent so the AI can use Telegram data via three tools: list chats, get user info, and read chat messages. The client handles tool calls from the API, runs the right tool, and sends tool results back in a follow-up completion request.

Changes

  • Tool definitions: buildToolsDefinition() defines list_chats, get_user_info, and read_chat_messages for the chat completion request.
  • Tool execution:
    • list_chats: Returns dialogs (chat id, name, type, optional username) with a configurable limit.
    • get_user_info: Resolves user by user_id or username and returns id, name, and username.
    • read_chat_messages: Loads recent messages for a chat via requestRecentForContext and returns sender, date, and text.
  • Request flow: sendCompletionRequest() sends the payload; handleResponse() detects tool_calls, runs processToolCalls(), appends tool-result messages to _lastSentMessages, and calls sendCompletionRequest() again for the next turn.
  • Context loading: Chat context is no longer fetched inside the completion path. ensureContextLoaded(done) is added so callers can preload context (e.g. for the current chat/topic) before starting a completion.
  • System message: prepareSystemMessage() no longer takes or injects chat context; the system prompt is built without it.
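For context, tool definitions in the chat-completions API follow the OpenAI function-tool schema, so one entry emitted by buildToolsDefinition() plausibly looks like the sketch below (the exact description wording and parameter names are assumptions, not taken from the diff):

```json
{
  "type": "function",
  "function": {
    "name": "list_chats",
    "description": "List the user's dialogs.",
    "parameters": {
      "type": "object",
      "properties": {
        "limit": {
          "type": "integer",
          "description": "How many dialogs to return"
        }
      }
    }
  }
}
```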

@belyakova-anna belyakova-anna requested a review from fedorn March 4, 2026 17:32
@belyakova-anna belyakova-anna self-assigned this Mar 4, 2026
Repository owner deleted a comment from claude bot Mar 5, 2026
@fedorn (Owner)

fedorn commented Mar 5, 2026

@claude Please review this PR and suggest improvements

@claude

claude bot commented Mar 5, 2026

Claude finished @fedorn's task in 1m 38s


PR Review: feat(ai): add tool-calling agent

  • Read AGENTS.md for repo instructions
  • Read changed files and PR diff
  • Analyze code and provide review feedback

Overall this is a clean, well-structured addition. The tool-calling loop, the async State pattern in processToolCalls, and the separation of sendCompletionRequest from sendChatCompletion are all solid. Below are the issues and suggestions I found:


Bugs / Correctness

1. No recursion depth limit on tool-calling loop
The loop handleResponse → processToolCalls → sendCompletionRequest → handleResponse can recurse indefinitely if the model keeps returning tool calls. A misbehaving model (or one that loops between tools) will stack up network requests without bound and never set _isWaitingForResponse = false.

Add a max iteration counter (e.g. 10) and bail with an error when exceeded.

openai_client.cpp:128-133 — the recursive call site.

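A minimal sketch of such a cap, written without the Qt types (kMaxToolTurns and the step callback are illustrative names, not from the PR):

```cpp
#include <functional>

// Bounded tool-call loop sketch. `step` performs one completion round and
// returns true if the model requested more tool calls, false when it
// produced a final answer.
constexpr int kMaxToolTurns = 10;

inline bool runToolLoop(const std::function<bool()> &step) {
    for (int turn = 0; turn < kMaxToolTurns; ++turn) {
        if (!step()) {
            return true;  // model produced a final answer
        }
    }
    return false;  // turn budget exceeded: caller should surface an error
}
```

In the real client the counter would live as a member reset in sendChatCompletion and incremented in processToolCalls, with the error callback fired when the budget is exhausted.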

2. reply->deleteLater() called twice on network errors
Both the finished and errorOccurred signals fire when a request fails. Each lambda calls reply->deleteLater(). While Qt tolerates multiple deleteLater() calls on the same object, the handleResponse path will also read from an errored reply. More importantly, handleNetworkError sets _isWaitingForResponse = false but handleResponse then runs too and may call _onError a second time (or process partial data).

openai_client.cpp:312-320 — both signals connected independently.

Recommended fix: only connect finished, and check reply->error() inside it (which you already do in handleResponse). Remove the errorOccurred connection, or guard against double-handling.

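One way to guard against double-handling when both signals can fire, sketched without Qt types (ReplyGuard is an illustrative name; in practice the flag would sit in the per-reply lambda state):

```cpp
// Ensures a reply is processed exactly once even if both finished and
// errorOccurred invoke the handler.
struct ReplyGuard {
    bool handled = false;

    // Returns true only on the first call.
    bool tryHandle() {
        if (handled) {
            return false;
        }
        handled = true;
        return true;
    }
};
```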

3. Network error silently swallowed
In handleResponse, when reply->error() != QNetworkReply::NoError, _isWaitingForResponse is set to false but _onError is never called. The user sees no feedback at all.

openai_client.cpp:79-81

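A sketch of the fixed error path, with field names simplified from the review's _isWaitingForResponse / _onError:

```cpp
#include <functional>
#include <string>

// On a network error the client should both clear the waiting flag and
// invoke the error callback, instead of silently returning.
struct ClientSketch {
    bool waiting = true;
    std::function<void(const std::string &)> onError;

    void handleNetworkError(const std::string &message) {
        waiting = false;       // previously the only effect
        if (onError) {
            onError(message);  // the fix: report the error to the user
        }
    }
};
```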

4. displayFrom() may return null
In executeToolReadChatMessages (line 447), item->displayFrom() can return nullptr for channel posts, service messages, etc. Dereferencing it without a null check will crash.

openai_client.cpp:447

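A null-safe accessor sketch (Sender is a stand-in for whatever displayFrom() actually returns; the "unknown" fallback label is an assumption):

```cpp
#include <string>

struct Sender {
    std::string name;
};

// Returns a safe sender label instead of dereferencing a possibly-null
// pointer (channel posts, service messages).
inline std::string senderName(const Sender *from) {
    return from ? from->name : std::string("unknown");
}
```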

5. getChatContext is now dead code
The method getChatContext (lines 181-230) and its declaration in the header (line 58) are retained but never called in the new flow. prepareSystemMessage no longer uses chat context and the tools handle context retrieval. This should be removed to avoid confusion.


Security / Privacy Considerations

6. Tools always sent — no opt-in
The tool definitions are unconditionally included in every request (createRequestBody always calls buildToolsDefinition). This means the AI model can read any chat or list all dialogs without the user explicitly consenting. Consider adding a setting or at minimum a UI indicator that the AI has access to Telegram data via tools.

7. list_chats limit is hard-capped at 50 but the tool description says "default 50"
The clamp at line 360 (std::min(50, ...)) means the model can never request more than 50. This is fine as a safety limit, but the tool description should say "max 50" rather than "default 50" so the model isn't misled about what it can request.
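The clamp itself might look like the following (the lower bound of 1 is an extra assumption, not in the original code):

```cpp
#include <algorithm>

// Safety cap on how many dialogs list_chats may return, regardless of
// what the model asks for.
constexpr int kMaxChatLimit = 50;

inline int clampChatLimit(int requested) {
    return std::min(kMaxChatLimit, std::max(1, requested));
}
```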


Robustness

8. _lastSentMessages grows unboundedly across tool turns
Each tool-call round appends assistant + tool-result messages to _lastSentMessages, which is then sent again. In a multi-turn tool loop the payload can grow very large. Consider capping the total message count or payload size.
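A simple cap could trim the oldest turns while preserving the system message; a sketch with illustrative names (the real code stores richer message objects in _lastSentMessages, not strings):

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Drops the oldest non-system messages until the history fits the cap.
// Index 0 is assumed to hold the system message.
inline void trimHistory(std::deque<std::string> &messages,
                        std::size_t maxMessages) {
    while (messages.size() > maxMessages) {
        messages.erase(messages.begin() + 1);  // oldest turn after system
    }
}
```

A byte-size cap on the serialized payload would work the same way, just with a different loop condition.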

9. OpenRouter headers removed
The previous code sent HTTP-Referer and X-Title headers. These are removed in sendCompletionRequest. If OpenRouter is still the default provider, these headers may be needed for attribution/rate limits.

openai_client.cpp:302-306 — only Authorization and Content-Type are set.

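The restored header set might look like this, sketched as a plain map rather than QNetworkRequest calls (the referer URL and title are placeholders, not values from the repo; OpenRouter uses HTTP-Referer and X-Title for app attribution):

```cpp
#include <map>
#include <string>

// Headers for the completion request, including the OpenRouter
// attribution headers the old code sent.
inline std::map<std::string, std::string>
buildHeaders(const std::string &apiKey) {
    return {
        {"Authorization", "Bearer " + apiKey},
        {"Content-Type", "application/json"},
        {"HTTP-Referer", "https://example.invalid/app"},  // placeholder
        {"X-Title", "AppName"},                           // placeholder
    };
}
```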


Minor / Style

10. auto should be preferred over explicit types per the repo style guide (AGENTS.md). Several places use explicit types like QByteArray responseData, QJsonParseError parseError, QString aiResponse, QJsonArray toolCalls, etc. These should use auto / const auto.

11. ensureContextLoaded preloads context for the current chat, but the tools (read_chat_messages) load context for arbitrary chats. This means ensureContextLoaded is only useful for the old getChatContext path, which is now dead code. Consider whether ensureContextLoaded is still needed.


Summary

The architecture is sound — the tool-call loop with async state management is well done. The main concerns are: (1) unbounded recursion on tool calls, (2) double signal handling causing potential double error callbacks, (3) null pointer crash risk in displayFrom(), and (4) dead code (getChatContext) that should be cleaned up. Items 1-3 are bugs that should be fixed before merge.

