…y, performance, and schema design on servers, databases, and schemas, as appropriate.
…alyse and report on issues in query plans.
Walkthrough

Adds a full AI/LLM integration to pgAdmin: provider configs, LLM clients, tooling for read-only DB inspection, NLQ chat and EXPLAIN analysis SSE endpoints, a multi-stage report pipeline (security/performance/design) with streaming, frontend UI panels/components, preferences, tests, and migration/permission updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Browser as Browser
    participant Backend as Backend
    participant Pipeline as ReportPipeline
    participant LLM as LLMClient
    participant DB as Database
    User->>Browser: Request Report (stream)
    Browser->>Backend: GET /llm/<report>/.../stream
    Backend->>Pipeline: execute_with_progress(context, conn, manager)
    Pipeline->>DB: execute queries (create_query_executor)
    DB-->>Pipeline: query results
    Pipeline->>LLM: chat(planning/analysis prompts + data)
    alt LLM returns tool call
        LLM-->>Pipeline: tool_calls
        Pipeline->>DB: execute tool queries
        DB-->>Pipeline: tool results
        Pipeline->>LLM: continue chat with tool results
    end
    loop per section
        Pipeline->>Browser: SSE progress events
    end
    Pipeline->>LLM: chat(synthesis prompt)
    LLM-->>Pipeline: final report
    Pipeline->>Browser: SSE complete with final report
```
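The SSE events in this diagram use the standard `text/event-stream` framing: each message is a `data:` line (here carrying JSON) terminated by a blank line, which the browser's `EventSource` API reassembles. A minimal sketch of that framing; the payload fields are illustrative assumptions, not the PR's actual event schema:

```python
import json


def sse_event(payload: dict) -> str:
    """Frame a JSON payload as a single Server-Sent Events message.

    An SSE message is one or more `data:` lines followed by a blank
    line; the blank line is what marks the end of the event.
    """
    return f"data: {json.dumps(payload)}\n\n"


# A hypothetical per-section progress event, as the pipeline might emit:
progress = sse_event({"type": "progress", "section": "security", "pct": 40})
```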
```mermaid
sequenceDiagram
    actor User
    participant Browser
    participant Backend
    participant ChatSvc as chat_with_database
    participant Tools as DATABASE_TOOLS
    participant DB as PostgreSQL
    participant LLM
    User->>Browser: Enter NLQ question
    Browser->>Backend: POST /sqleditor/nlq/chat/<trans_id>/stream
    Backend->>ChatSvc: chat_with_database(message, tools=Tools)
    ChatSvc->>LLM: chat(messages + tool specs)
    alt tool call requested
        LLM-->>ChatSvc: tool_calls
        loop each tool_call
            ChatSvc->>Tools: execute_tool(name, args)
            Tools->>DB: readonly SQL
            DB-->>Tools: rows
            Tools-->>ChatSvc: tool result
            ChatSvc->>LLM: continue chat with tool results
        end
    end
    ChatSvc-->>Backend: assistant final response (stream)
    Backend->>Browser: SSE events (thinking, sql, assistant)
    Browser->>User: Display SQL and assistant reply
```
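The `alt`/`loop` blocks above describe a conventional tool-calling loop: ask the model, run any requested tools, feed the results back, and repeat until the model answers in plain text. A minimal sketch with a stubbed model; `run_tool_loop`, `fake_llm`, and the message shapes are illustrative assumptions, not pgAdmin's actual API:

```python
def run_tool_loop(llm, tools, messages, max_iterations=5):
    """Drive an LLM tool-calling loop until a final text answer."""
    for _ in range(max_iterations):
        reply = llm(messages)
        calls = reply.get("tool_calls")
        if not calls:
            return reply["content"]  # final assistant answer
        for call in calls:
            # Execute the requested tool and append its result so the
            # model can see it on the next iteration.
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("tool loop exceeded max_iterations")


# Stubbed model: first asks for a row count, then answers in text.
def fake_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": "There are 3 rows."}
    return {"tool_calls": [{"name": "count_rows", "args": {"table": "t"}}]}


tools = {"count_rows": lambda table: 3}
answer = run_tool_loop(
    fake_llm, tools, [{"role": "user", "content": "How many rows?"}]
)
```

A bounded iteration count matters here: a confused model can request tools forever, which is why the real service also caps iterations.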
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 9
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
web/pgadmin/static/js/components/FormComponents.jsx (1)

932-964: Add callbacks to the dependency array or stabilize them with `useCallback`.

The effect uses `onError`, `onChange`, and `optionsLoaded` callbacks that are not included in the dependency array `[optionsReloadBasis, mountId]`. Since `mountId` never changes after mount, the effect only runs when `optionsReloadBasis` changes. If parent components pass new callback instances on re-renders (common with inline arrow functions like `onChange={(val) => { ... }}`), the effect will capture stale closures. When the async promise resolves and calls these callbacks, it will invoke the old function references rather than the current ones.

Either add these callbacks to the dependency array, or ensure parent components wrap them in `useCallback` to maintain stable references.
🟡 Minor comments (16)
web/pgadmin/tools/sqleditor/tests/test_explain_analyze_ai.py (1)

71-72: Missing `super().setUp()` call in test class.

`BaseTestGenerator.setUp` performs important setup, including server connection and mock data processing. The empty `setUp` override prevents this.

Apply this diff:

```diff
     def setUp(self):
-        pass
+        super().setUp()
```

web/pgadmin/tools/sqleditor/tests/test_explain_analyze_ai.py (1)
99-107: Fix line length violations flagged by the CI pipeline.

Lines 101 and 106 exceed the 79-character limit required by pycodestyle.

Apply this diff to fix the formatting:

```diff
-            mock_check_trans = patch(
-                'pgadmin.tools.sqleditor.check_transaction_status',
-                return_value=(True, None, mock_conn, mock_trans_obj, mock_session)
-            )
+            mock_check_trans = patch(
+                'pgadmin.tools.sqleditor.check_transaction_status',
+                return_value=(
+                    True, None, mock_conn, mock_trans_obj, mock_session
+                )
+            )
         else:
-            mock_check_trans = patch(
-                'pgadmin.tools.sqleditor.check_transaction_status',
-                return_value=(False, 'Transaction ID not found', None, None, None)
-            )
+            mock_check_trans = patch(
+                'pgadmin.tools.sqleditor.check_transaction_status',
+                return_value=(
+                    False, 'Transaction ID not found', None, None, None
+                )
+            )
```

web/pgadmin/llm/static/js/AIReport.jsx (1)
555-563: Missing dependencies in `useEffect` may cause stale closures.

The effect references `generateReport` and `closeEventSource` but doesn't list them in the dependency array. This could lead to stale function references.

Add the missing dependencies:

```diff
 useEffect(() => {
   // Generate report on mount
   generateReport();
   // Cleanup on unmount
   return () => {
     closeEventSource();
   };
-}, [sid, did, scid, reportCategory, reportType]);
+}, [sid, did, scid, reportCategory, reportType, generateReport, closeEventSource]);
```

Both functions are wrapped in `useCallback`, so this should be safe.

web/pgadmin/llm/tools/database.py (1)
260-264: Query manipulation could break certain SQL patterns.

The LIMIT injection logic wraps the original query as a subquery, but this approach can fail for queries containing `ORDER BY` without parentheses, CTEs, or `UNION` clauses. The `'LIMIT' not in query_upper` check is also fragile: it could match `LIMIT` in a string literal or column name.

Consider a more robust approach, or document the limitations:

```diff
     # Add LIMIT if not already present and query looks like SELECT
     query_upper = query.strip().upper()
-    if query_upper.startswith('SELECT') and 'LIMIT' not in query_upper:
-        query = f"({query}) AS llm_subquery LIMIT {max_rows + 1}"
-        query = f"SELECT * FROM {query}"
+    # Note: This simple wrapping may not work for all query patterns
+    # (CTEs, UNIONs, etc.). The LLM should ideally include LIMIT itself.
+    if (query_upper.startswith('SELECT') and
+            'LIMIT' not in query_upper and
+            'UNION' not in query_upper and
+            'WITH' not in query_upper):
+        query = f"SELECT * FROM ({query}) AS llm_subquery LIMIT {max_rows + 1}"
```

web/pgadmin/llm/prompts/explain.py (1)
12-12: Fix line length to comply with PEP 8.

Line 12 exceeds the 79-character limit (95 characters). This causes the pycodestyle E501 error flagged by the pipeline.

Apply this diff to fix the line length:

```diff
-EXPLAIN_ANALYSIS_PROMPT = """You are a PostgreSQL performance expert integrated into pgAdmin 4.
+EXPLAIN_ANALYSIS_PROMPT = """
+You are a PostgreSQL performance expert integrated into pgAdmin 4.
 Your task is to analyze EXPLAIN plan output and provide actionable optimization recommendations.
```

Based on pipeline failure logs.
web/migrations/versions/add_tools_ai_permission_.py (1)

40-53: Potential issue with empty-string permissions.

The code filters for `permissions.isnot(None)` but then checks `if permissions:`. If `permissions` is an empty string, it would pass the SQL filter but fail the Python truthiness check, skipping the update. However, the more concerning case is if there are roles with an empty string that slip through.

Consider this more defensive approach:

```diff
     for row in result:
         role_id = row[0]
         permissions = row[1]
-        if permissions:
+        if permissions and permissions.strip():
             perms_list = permissions.split(',')
             if 'tools_ai' not in perms_list:
                 perms_list.append('tools_ai')
```

Alternatively, you could add a filter to the SQL query:

```diff
     result = conn.execute(
         sa.select(role_table.c.id, role_table.c.permissions)
-        .where(role_table.c.permissions.isnot(None))
+        .where(sa.and_(
+            role_table.c.permissions.isnot(None),
+            role_table.c.permissions != ''
+        ))
     )
```

web/config.py (1)
1018-1023: Fix line length to pass linting.

The pipeline failed due to line 1019 exceeding 79 characters. Split the comment across multiple lines:

```diff
 # Docker Model Runner Configuration
-# Docker Desktop 4.40+ includes a built-in model runner with an OpenAI-compatible
-# API. No API key is required.
+# Docker Desktop 4.40+ includes a built-in model runner with an
+# OpenAI-compatible API. No API key is required.
 # URL for the Docker Model Runner API endpoint. Leave empty to disable.
 # Default value: http://localhost:12434
 DOCKER_API_URL = ''
```

web/pgadmin/tools/sqleditor/__init__.py (1)
3107-3109: Remove the trailing blank line at the end of the file.

The pipeline fails due to a blank line at the end of the file (W391).
web/pgadmin/llm/static/js/ai_tools.js (1)
399-403: Potential null reference if `getDockerHandler` returns undefined.

If `getDockerHandler` is not available or returns `undefined`, calling `handler.focus()` will throw an error.

```diff
 let handler = pgBrowser.getDockerHandler?.(
   BROWSER_PANELS.AI_REPORT_PREFIX,
   pgBrowser.docker.default_workspace
 );
+if (!handler) {
+  pgBrowser.report_error(
+    gettext('Report'),
+    gettext('Cannot open report panel.')
+  );
+  return;
+}
 handler.focus();
```

web/pgadmin/tools/sqleditor/__init__.py (1)
3033-3044: Break the long line to comply with the PEP 8 line length limit.

Line 3044 exceeds 79 characters, causing the CI pipeline to fail.

```diff
-Provide your analysis identifying performance bottlenecks and optimization recommendations."""
+Provide your analysis identifying performance bottlenecks and \
+optimization recommendations."""
```

web/pgadmin/llm/providers/ollama.py (1)
234-236: Remove the redundant `re` import inside the function.

The `re` module is already imported at the top of the file (line 13). This duplicate import is flagged by static analysis.

```diff
 def _parse_response(self, data: dict) -> LLMResponse:
     """Parse the Ollama API response into an LLMResponse."""
-    import re
-
     message = data.get('message', {})
```

web/pgadmin/tools/sqleditor/__init__.py (1)
2742-2752: Fix pycodestyle violations to pass the CI pipeline.

The pipeline is failing due to style issues. Line 2747 requires two blank lines before the comment section, and the code block starting at line 2857 exceeds the 79-character limit.

Apply this diff to fix the blank line issue:

```diff
     return get_user_macros()
+
 
 # =============================================================================
 # Natural Language Query (NLQ) to SQL
```

Also, break the long regex pattern on line 2857 into multiple lines.

Committable suggestion skipped: line range outside the PR's diff.
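One idiomatic way to break a long regex across lines, since adjacent string literals concatenate at compile time, is to split the pattern into commented fragments. This is an illustrative pattern only; the actual regex on line 2857 is not shown in the review:

```python
import re

# Each fragment stays under 79 characters and carries its own comment;
# Python joins the adjacent literals into one pattern string.
SELECT_RE = re.compile(
    r"^\s*SELECT\b"   # statement must start with SELECT
    r".*\bFROM\b",    # and reference at least one relation
    re.IGNORECASE | re.DOTALL,
)
```

`re.VERBOSE` is an alternative when the pattern itself should contain whitespace and inline comments.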
web/pgadmin/llm/utils.py (1)

225-257: Docstring mentions three providers but the implementation supports four.

The docstring states `Returns: The provider name ('anthropic', 'openai', 'ollama') or None`, but the code validates against `{'anthropic', 'openai', 'ollama', 'docker'}`. Update the docstring to include 'docker'.

```diff
     Returns:
-        The provider name ('anthropic', 'openai', 'ollama') or None if disabled.
+        The provider name ('anthropic', 'openai', 'ollama', 'docker') or None if disabled.
     """
```

web/pgadmin/llm/models.py (1)
100-108: Use an explicit `Optional` type annotation.

PEP 484 prohibits implicit `Optional`. The parameter should explicitly declare `Optional[list[ToolCall]]`.

```diff
     @classmethod
-    def assistant(cls, content: str,
-                  tool_calls: list[ToolCall] = None) -> 'Message':
+    def assistant(cls, content: str,
+                  tool_calls: Optional[list[ToolCall]] = None) -> 'Message':
         """Create an assistant message."""
```

web/pgadmin/llm/__init__.py (1)
205-208: Fix line too long (pipeline failure).

Line 207 exceeds 79 characters, causing the pipeline to fail.

```diff
         help_str=gettext(
             'The Ollama model to use. Models are loaded dynamically '
-            'from your Ollama server. You can also type a custom model name.'
+            'from your Ollama server. You can also type a custom '
+            'model name.'
         ),
```

web/pgadmin/llm/__init__.py (1)
1031-1034: Remove the unused `manager` parameter.

The `manager` parameter is not used in the function body, and the server version is already available in `security_info`. Note: this function has no callers in the codebase, so no caller updates are needed.

```diff
-def _generate_security_report_llm(client, security_info, manager):
+def _generate_security_report_llm(client, security_info):
     """
     Use the LLM to analyze the security configuration and generate a report.
     """
```
🧹 Nitpick comments (47)
web/pgadmin/static/js/components/FormComponents.jsx (1)
921-922: Consider a more idiomatic pattern for tracking remounts (optional).

The `Math.random()` approach works correctly and achieves the goal of reloading options on remount. However, a more conventional React pattern would be to use a ref or callback ref to track mount state, or simply rely on the cleanup function behavior.

Alternative approach:

```diff
-  // Force options to reload on component remount (each mount gets a new ID)
-  const [mountId] = useState(() => Math.random());
```

Then in the `useEffect` cleanup, you could increment a ref counter if you need to track remounts explicitly, or simply rely on the existing `optionsReloadBasis` prop for controlled reloading.

That said, the current implementation is functional, and the collision risk with `Math.random()` is negligible in practice.

web/pgadmin/llm/README.md (1)
25-25: Format the URL as code.

The bare URL should be enclosed in backticks for proper Markdown formatting.

Apply this diff:

```diff
-- `OLLAMA_API_URL`: URL for Ollama server (e.g., 'http://localhost:11434')
+- `OLLAMA_API_URL`: URL for Ollama server (e.g., `http://localhost:11434`)
```

web/pgadmin/tools/sqleditor/tests/test_explain_analyze_ai.py (1)
170-172: Empty `setUp`/`tearDown` methods should call `super()` or be removed.

These empty methods override parent behavior without providing value. Either call `super()` or remove them to inherit the default behavior.

```diff
     def tearDown(self):
-        pass
+        super().tearDown()
```

Or remove them entirely if no custom logic is needed.

Also applies to: 181-183, 198-199
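The cost of an empty override is easy to demonstrate: the parent's setup silently never runs. A minimal sketch with plain classes standing in for the real `BaseTestGenerator` hierarchy:

```python
class BaseTest:
    def setUp(self):
        self.conn = "connected"  # parent prepares shared state


class BadTest(BaseTest):
    def setUp(self):
        pass  # shadows BaseTest.setUp: self.conn is never set


class GoodTest(BaseTest):
    def setUp(self):
        super().setUp()  # parent setup still runs
        self.extra = True  # child can add its own state on top


bad, good = BadTest(), GoodTest()
bad.setUp()
good.setUp()
```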
web/pgadmin/tools/sqleditor/static/js/components/sections/NLQChatPanel.jsx (1)

206-210: Handle clipboard API errors gracefully.

`navigator.clipboard.writeText` can throw if permissions are denied or in insecure contexts. Consider adding error handling.

```diff
-  onClick={() => navigator.clipboard.writeText(message.sql)}
+  onClick={() => {
+    navigator.clipboard.writeText(message.sql).catch(() => {
+      // Fallback or silent fail - clipboard may be unavailable
+    });
+  }}
```

web/pgadmin/llm/static/js/SecurityReport.jsx (2)
211-215: Unused `onClose` prop.

The `onClose` prop is destructured but aliased to `_onClose` and never used. Either implement or remove it.

```diff
 export default function SecurityReport({
   sid, did, scid,
   reportType = 'server',
   serverName, databaseName, schemaName,
-  onClose: _onClose
+  onClose
 }) {
```

Or remove it from propTypes if not needed.
262-265: Missing dependencies in `useEffect`.

The `useEffect` references `generateReport`, which is defined inside the component but not included in the dependency array. This works because `generateReport` is stable (no `useCallback`), but it could cause stale closure issues.

Either wrap `generateReport` in `useCallback` and add it to the dependencies, or use an inline function:

```diff
 useEffect(() => {
-  // Generate report on mount
-  generateReport();
+  // Generate report on mount/prop change
+  const doGenerate = () => {
+    // Move generateReport logic here or call it
+  };
+  doGenerate();
 }, [sid, did, scid, reportType]);
```

web/pgadmin/static/js/components/SelectRefresh.jsx (1)
46-51: Button label is context-specific.

The button title is hardcoded to 'Refresh models', which is specific to the LLM configuration use case. Since `SelectRefresh` is a generic component, consider making the tooltip configurable via props.

```diff
-function ChildContent({ cid, helpid, onRefreshClick, isRefreshing, ...props }) {
+function ChildContent({ cid, helpid, onRefreshClick, isRefreshing, refreshTitle, ...props }) {
   return (
     <StyledBox>
       ...
       <Box className="SelectRefresh-buttonContainer">
         <PgIconButton
           onClick={onRefreshClick}
           icon={<RefreshIcon />}
-          title={gettext('Refresh models')}
+          title={refreshTitle || gettext('Refresh')}
           disabled={isRefreshing}
         />
       </Box>
     </StyledBox>
   );
 }
```

Then pass `refreshTitle` from `controlProps` or as a default.

web/pgadmin/llm/static/js/AIReport.jsx (2)
370-375: Polling for theme changes every second is inefficient.

A 1-second interval polling `getComputedStyle` is wasteful. Consider using a `MutationObserver` on the body's class/style attributes, or simply run once on mount if theme changes are rare.

If theme changes at runtime are unlikely, remove the interval:

```diff
 useEffect(() => {
   const updateColors = () => {
     // ...
   };
   updateColors();
-
-  // Check periodically in case theme changes
-  const interval = setInterval(updateColors, 1000);
-  return () => clearInterval(interval);
 }, []);
```

Or use `MutationObserver` for efficient detection.
329-333: Unused `onClose` prop (same as SecurityReport).

The `onClose` prop is destructured but aliased to `_onClose` and never used.

web/pgadmin/llm/tools/database.py (4)
147-151: Remove the unused variable `readonly_wrapper`.

The `readonly_wrapper` template string is defined but never used. The actual implementation below uses separate `BEGIN TRANSACTION READ ONLY` and `ROLLBACK` statements.

```diff
-    # Wrap the query in a read-only transaction
-    # This ensures even if the query tries to modify data, it will fail
-    readonly_wrapper = """
-    BEGIN TRANSACTION READ ONLY;
-    {query}
-    ROLLBACK;
-    """
-
     # For SELECT queries, we need to handle them differently
```
87-126: Unused `manager` parameter in `_connect_readonly`.

The `manager` parameter is never used within the function body. Either remove it from the signature or document why it's reserved for future use.

```diff
-def _connect_readonly(manager, conn, conn_id: str) -> tuple[bool, str]:
+def _connect_readonly(conn, conn_id: str) -> tuple[bool, str]:
```

If removed, update all call sites accordingly (lines 253, 306, 431, 539).
80-84: Chain exceptions using `raise ... from e` for better traceability.

When re-raising as a custom exception, preserve the original exception chain for debugging.

```diff
     except Exception as e:
         raise DatabaseToolError(
             f"Failed to get connection: {str(e)}",
             code="CONNECTION_ERROR"
-        )
+        ) from e
```

Apply the same pattern at lines 204-207.
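What `from e` buys you is visible on the exception object itself: the original error is recorded on `__cause__`, so tracebacks show both exceptions instead of hiding the root cause. A small sketch, with a stand-in for the real `DatabaseToolError`:

```python
class DatabaseToolError(Exception):
    """Stand-in for the tool error type defined in the PR."""


def connect():
    raise ConnectionResetError("socket closed")


try:
    try:
        connect()
    except Exception as e:
        # `from e` links the new exception to the original one, so
        # "The above exception was the direct cause of..." appears
        # in the traceback and __cause__ is set.
        raise DatabaseToolError(f"Failed to get connection: {e}") from e
except DatabaseToolError as err:
    caught = err
```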
200-207: Silent exception swallowing during rollback cleanup.

The bare `except Exception: pass` block hides rollback failures. While cleanup should not propagate errors, consider logging for observability.

```diff
     try:
         conn.execute_void("ROLLBACK")
-    except Exception:
-        pass
+    except Exception as rollback_err:
+        # Log but don't propagate - we're already handling an error
+        import logging
+        logging.getLogger(__name__).debug(
+            "Rollback failed during error handling: %s", rollback_err
+        )
```

Similar patterns exist at lines 282-285, 400-403, 507-510, 660-663.
web/pgadmin/llm/providers/__init__.py (1)
16-16: Consider sorting `__all__` alphabetically.

Static analysis detected that the `__all__` list is not sorted alphabetically. While this is a minor style issue, maintaining alphabetical order improves consistency and maintainability.

Apply this diff to sort the exports:

```diff
-__all__ = ['AnthropicClient', 'OpenAIClient', 'OllamaClient']
+__all__ = ['AnthropicClient', 'OllamaClient', 'OpenAIClient']
```

Based on static analysis hints (Ruff RUF022).
web/pgadmin/llm/prompts/__init__.py (1)
15-15: Consider sorting `__all__` alphabetically.

The static analysis tool suggests alphabetical sorting of `__all__` entries. While the current order is acceptable, sorting would improve consistency.

Apply this diff:

```diff
-__all__ = ['NLQ_SYSTEM_PROMPT', 'EXPLAIN_ANALYSIS_PROMPT']
+__all__ = ['EXPLAIN_ANALYSIS_PROMPT', 'NLQ_SYSTEM_PROMPT']
```

web/pgadmin/llm/tests/test_llm_status.py (1)
19-43: Consider annotating `scenarios` with `ClassVar`.

The static analysis tool suggests using `typing.ClassVar` for the mutable class attribute `scenarios`. This would improve type safety.

```diff
+from typing import ClassVar
+
 class LLMStatusTestCase(BaseTestGenerator):
     """Test cases for LLM status endpoint"""

-    scenarios = [
+    scenarios: ClassVar = [
         ('LLM Status - Disabled', dict(
```

web/pgadmin/llm/tests/test_report_endpoints.py (2)
154-189: Consider testing actual streaming content.

The streaming test verifies the response type but uses an empty generator. While this validates the SSE format, it doesn't test actual content streaming. Consider adding a test case with mock content:

```python
mock_streaming.return_value = iter(["data: chunk1\n\n", "data: chunk2\n\n"])
```

Then verify the response contains the expected chunks.
192-233: The `simulate_error` flag is always True - consider simplifying.

The `simulate_error` parameter in scenarios is always `True`, making the conditional on line 223 unnecessary. Either remove the conditional, or add a scenario where `simulate_error=False` to test the happy path within this test class.

```diff
-    scenarios = [
-        ('Report with API Error', dict(
-            simulate_error=True
-        )),
-    ]
+    scenarios = [
+        ('Report with API Error', dict()),
+    ]
```

And simplify the test:

```diff
-        if self.simulate_error:
-            mock_generate.side_effect = Exception("API connection failed")
+        mock_generate.side_effect = Exception("API connection failed")
```

web/pgadmin/preferences/static/js/components/PreferencesHelper.jsx (1)
105-143: Consider extracting the options loader to reduce duplication.

The options loading logic between the `optionsRefreshUrl` (lines 122-143) and `optionsUrl` (lines 149-170) branches is nearly identical. Consider extracting a helper function:

```js
const createOptionsLoader = (optionsEndpoint, staticOptions) => {
  return () => {
    return new Promise((resolve) => {
      const api = getApiInstance();
      const optionsUrl = url_for(optionsEndpoint);
      api.get(optionsUrl)
        .then((res) => {
          if (res.data?.data?.models) {
            resolve([...res.data.data.models, ...staticOptions]);
          } else {
            resolve(staticOptions);
          }
        })
        .catch(() => {
          resolve(staticOptions);
        });
    });
  };
};
```

Then use it in both branches:

```js
element.options = createOptionsLoader(optionsEndpoint, staticOptions);
```

web/pgadmin/llm/reports/queries.py (1)
851-856: Unused `context` parameter - consider documenting intent.

The `context` parameter is declared but not used. If it's reserved for future scope filtering, consider prefixing it with an underscore to signal intentional non-use, or add a TODO comment:

```diff
 def execute_query(
     conn,
     query_id: str,
-    context: dict,
+    context: dict,  # Reserved for future scope-based filtering
     params: Optional[list] = None
 ) -> dict[str, Any]:
```

Or use the underscore prefix:

```diff
-    context: dict,
+    _context: dict,
```

web/pgadmin/llm/tools/__init__.py (1)
22-30: Consider sorting `__all__` for consistency.

Static analysis suggests alphabetically sorting the `__all__` list for better maintainability.

Apply this diff to sort the list:

```diff
 __all__ = [
+    'DATABASE_TOOLS',
+    'DatabaseToolError',
     'execute_readonly_query',
+    'execute_tool',
     'get_database_schema',
     'get_table_columns',
-    'get_table_info',
-    'execute_tool',
-    'DatabaseToolError',
-    'DATABASE_TOOLS'
+    'get_table_info'
 ]
```

web/pgadmin/llm/reports/__init__.py (1)
25-37: Consider sorting `__all__` for consistency.

Static analysis suggests alphabetically sorting the `__all__` list for better maintainability.

Apply this diff to sort the list:

```diff
 __all__ = [
-    'ReportPipeline',
-    'Section',
-    'SectionResult',
-    'Severity',
+    'DESIGN_SECTIONS',
+    'PERFORMANCE_SECTIONS',
     'SECURITY_SECTIONS',
-    'PERFORMANCE_SECTIONS',
-    'DESIGN_SECTIONS',
-    'get_sections_for_report',
+    'ReportPipeline',
+    'Section',
+    'SectionResult',
+    'Severity',
+    'execute_query',
+    'get_query',
     'get_sections_for_scope',
-    'get_query',
-    'execute_query',
+    'get_sections_for_report'
 ]
```

web/pgadmin/llm/providers/anthropic.py (2)
114-123: Preserve the exception chain for better debugging.

The generic exception handler should use `raise ... from e` to maintain the exception traceback for debugging purposes.

Apply this diff:

```diff
         except LLMClientError:
             raise
         except Exception as e:
             raise LLMClientError(LLMError(
                 message=f"Request failed: {str(e)}",
                 provider=self.provider_name
-            ))
+            )) from e
```
207-226: Preserve exception chains in error handlers.

Both exception handlers should use `raise ... from e` to maintain the exception traceback.

Apply this diff:

```diff
             raise LLMClientError(LLMError(
                 message=error_msg,
                 code=str(e.code),
                 provider=self.provider_name,
                 retryable=e.code in (429, 500, 502, 503, 504)
-            ))
+            )) from e
         except urllib.error.URLError as e:
             raise LLMClientError(LLMError(
                 message=f"Connection error: {e.reason}",
                 provider=self.provider_name,
                 retryable=True
-            ))
+            )) from e
```

web/pgadmin/tools/sqleditor/__init__.py (5)
2793-2800: Prefix the unused unpacked variable with an underscore.

Static analysis correctly identifies that `session_obj` is unpacked but never used.

```diff
-    status, error_msg, conn, trans_obj, session_obj = \
+    status, error_msg, conn, trans_obj, _session_obj = \
         check_transaction_status(trans_id)
```
2913-2918: Use `logging.exception` for proper exception logging with traceback.

When logging exceptions in error handlers, use `exception()` instead of `error()` to automatically include the stack trace.

```diff
     except Exception as e:
-        current_app.logger.error(f'NLQ chat error: {str(e)}')
+        current_app.logger.exception('NLQ chat error: %s', e)
         yield _nlq_sse_event({
             'type': 'error',
             'message': str(e)
         })
```
2998-3005: Prefix unused unpacked variables with underscores.

Static analysis correctly identifies that `conn`, `trans_obj`, and `session_obj` are unpacked but never used in `explain_analyze_stream`.

```diff
-    status, error_msg, conn, trans_obj, session_obj = \
+    status, error_msg, _conn, _trans_obj, _session_obj = \
         check_transaction_status(trans_id)
```
3088-3093: Use `logging.exception` for proper exception logging with traceback.

Same issue as the NLQ chat error handler.

```diff
     except Exception as e:
-        current_app.logger.error(f'Explain analysis error: {str(e)}')
+        current_app.logger.exception('Explain analysis error: %s', e)
         yield _nlq_sse_event({
             'type': 'error',
             'message': str(e)
         })
```
2819-2836: Consider explicitly handling `RuntimeError` from `chat_with_database` for better error messaging.

The `chat_with_database` function raises `RuntimeError` when the LLM is not configured or when iterations exceed the limit. While the broad `Exception` catch handles these cases, explicit `RuntimeError` handling would allow you to provide more specific error messages to users, distinguishing configuration issues from iteration-limit scenarios.

web/pgadmin/llm/client.py (1)
87-97: Remove the unused variable and consider moving the return into an `else` block.

The `response` variable is assigned but never used. Additionally, the return statement could be cleaner.

```diff
 def validate_connection(self) -> tuple[bool, Optional[str]]:
     """
     Validate the connection to the LLM provider.

     Returns:
         Tuple of (success, error_message). If success is True,
         error_message is None.
     """
     try:
         # Try a minimal request to validate the connection
-        response = self.chat(
+        self.chat(
             messages=[Message.user("Hello")],
             max_tokens=10
         )
-        return True, None
     except LLMError as e:
         return False, str(e)
     except Exception as e:
-        return False, f"Connection failed: {str(e)}"
+        return False, f"Connection failed: {e!s}"
+    else:
+        return True, None
```

web/pgadmin/llm/providers/openai.py (3)
119-128: Preserve the exception chain with `raise ... from`.

When re-raising exceptions, use `raise ... from` to preserve the exception chain for better debugging.

```diff
         try:
             response_data = self._make_request(payload)
             return self._parse_response(response_data)
         except LLMClientError:
             raise
         except Exception as e:
             raise LLMClientError(LLMError(
-                message=f"Request failed: {str(e)}",
+                message=f"Request failed: {e!s}",
                 provider=self.provider_name
-            ))
+            )) from e
```
213-240: Preserve exception chains in HTTP error handling.

All of the exception handlers should use `raise ... from` to maintain the exception chain.

```diff
         except urllib.error.HTTPError as e:
             # ... error handling code ...
             raise LLMClientError(LLMError(
                 message=error_msg,
                 code=str(e.code),
                 provider=self.provider_name,
                 retryable=e.code in (429, 500, 502, 503, 504)
-            ))
+            )) from e
         except urllib.error.URLError as e:
             raise LLMClientError(LLMError(
                 message=f"Connection error: {e.reason}",
                 provider=self.provider_name,
                 retryable=True
-            ))
+            )) from e
         except socket.timeout:
             raise LLMClientError(LLMError(
                 message="Request timed out. The request may be too large "
                         "or the server is slow to respond.",
                 code='timeout',
                 provider=self.provider_name,
                 retryable=True
-            ))
+            )) from None
```
78-78: Consider documenting that `**kwargs` is reserved for future use.

The `kwargs` parameter is currently unused but may be intentionally reserved for provider-specific parameters. Consider adding a note in the docstring.

web/pgadmin/llm/static/js/ai_tools.js (1)
166-180: The LLM status check uses a fire-and-forget pattern without error feedback.

The `checkLLMStatus` function catches errors silently without logging. While the flags are correctly set to `false`, consider logging the error for debugging purposes.

```diff
       .catch(() => {
+        console.warn('Failed to check LLM status');
         this.llmEnabled = false;
         this.llmSystemEnabled = false;
         this.llmStatusChecked = true;
       });
```

web/pgadmin/llm/providers/ollama.py (2)
127-136: Preserve the exception chain with `raise ... from`.

Same issue as the OpenAI client - use `raise ... from` to preserve the exception chain.

```diff
         try:
             response_data = self._make_request(payload)
             return self._parse_response(response_data)
         except LLMClientError:
             raise
         except Exception as e:
             raise LLMClientError(LLMError(
-                message=f"Request failed: {str(e)}",
+                message=f"Request failed: {e!s}",
                 provider=self.provider_name
-            ))
+            )) from e
```
221-232: Preserve exception chains in HTTP error handling.

Same pattern as OpenAI - add `from e` or `from None` to preserve exception chains.

```diff
             raise LLMClientError(LLMError(
                 message=error_msg,
                 code=str(e.code),
                 provider=self.provider_name,
                 retryable=e.code in (500, 502, 503, 504)
-            ))
+            )) from e
         except urllib.error.URLError as e:
             raise LLMClientError(LLMError(
                 message=f"Cannot connect to Ollama: {e.reason}",
                 provider=self.provider_name,
                 retryable=True
-            ))
+            )) from e
```

web/pgadmin/llm/utils.py (1)
64-74: Consider logging exceptions for debugging purposes.

The broad `except Exception: pass` pattern silently swallows all errors. While returning `None` is appropriate for missing preferences, logging would help diagnose configuration issues.

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 def _get_preference_value(name):
     ...
     try:
         pref_module = Preferences.module('ai')
         if pref_module:
             pref = pref_module.preference(name)
             if pref:
                 value = pref.get()
                 if value and str(value).strip():
                     return str(value).strip()
-    except Exception:
-        pass
+    except Exception as e:
+        logger.debug("Failed to read preference '%s': %s", name, e)
     return None
```

web/pgadmin/llm/providers/docker.py (3)
70-72: Availability check may be too permissive.

`is_available()` only checks whether `_api_url` is set; it doesn't verify that the model runner is actually reachable. Consider whether a connectivity check would be more reliable, or document that this only checks configuration.
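A connectivity check could be as simple as a short-timeout TCP probe. `is_reachable` below is a hypothetical helper sketching that idea, not code from the PR; note that a successful connect only proves something is listening, not that the HTTP API works:

```python
import socket
from urllib.parse import urlparse


def is_reachable(url: str, timeout: float = 1.0) -> bool:
    """Best-effort TCP probe of a configured endpoint.

    Opens and immediately closes a connection; a refused, unresolvable,
    or timed-out connect means the endpoint is not reachable right now.
    """
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port),
                                      timeout=timeout):
            return True
    except OSError:
        return False
```

A probe like this is best done lazily (on first use, or behind a cached result) rather than on every status check, since it adds latency.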
125-131: Use exception chaining for better debugging.

When re-raising as `LLMClientError`, preserve the original exception context using `raise ... from e`.

```diff
         except LLMClientError:
             raise
         except Exception as e:
             raise LLMClientError(LLMError(
                 message=f"Request failed: {str(e)}",
                 provider=self.provider_name
-            ))
+            )) from e
```
197-247: Add exception chaining and consider URL scheme validation.

- Exception chaining (`from e` / `from None`) would preserve debug context.
- The URL is user-configurable; validating that it uses `http://` or `https://` would prevent unexpected behavior from `file://` or other schemes.

```diff
+from urllib.parse import urlparse
+
 def _make_request(self, payload: dict) -> dict:
     """Make an HTTP request to the Docker Model Runner API."""
+    # Validate URL scheme
+    parsed = urlparse(self._api_url)
+    if parsed.scheme not in ('http', 'https'):
+        raise LLMClientError(LLMError(
+            message=f"Invalid URL scheme: {parsed.scheme}. "
+                    "Only http/https supported.",
+            provider=self.provider_name,
+            retryable=False
+        ))
+
     headers = {
         'Content-Type': 'application/json'
     }
     ...
     except urllib.error.HTTPError as e:
         ...
         raise LLMClientError(LLMError(
             ...
-        ))
+        )) from None
     except urllib.error.URLError as e:
         raise LLMClientError(LLMError(
             ...
-        ))
+        )) from e
     except socket.timeout:
         raise LLMClientError(LLMError(
             ...
-        ))
+        )) from None
```

web/pgadmin/static/js/Explain/AIInsights.jsx (5)
54-60: Minor inconsistency: hardcoded pixel values vs theme spacing.
`ContentArea` and `LoadingContainer` use hardcoded pixel values (16px) while other styled components use `theme.spacing()`. Consider using `theme.spacing(2)` for consistency, though this is optional.

Also applies to: 125-132
366-381: Text color detection may not update on theme change.

The `useEffect` that extracts text colors from `document.body` runs only on mount (empty dependency array). If the user changes the theme dynamically, the colors won't update until the component remounts. Consider adding a MutationObserver or listening to theme change events if dynamic theme switching is supported.
400-401: Consider awaiting `fetchLlmInfo()` before analysis.

`fetchLlmInfo()` is called without `await`, meaning the LLM info might not be updated before the analysis starts. If displaying accurate provider/model info during loading is important, consider awaiting it.

```diff
 // Fetch latest LLM provider/model info before running analysis
-fetchLlmInfo();
+await fetchLlmInfo();
```
557-559: Missing error handling for clipboard API.
`navigator.clipboard.writeText()` can fail (e.g., if the page isn't focused or lacks clipboard permissions). Consider adding error handling.

```diff
 const handleCopySQL = (sqlText) => {
-  navigator.clipboard.writeText(sqlText);
+  navigator.clipboard.writeText(sqlText).catch((err) => {
+    console.warn('Failed to copy to clipboard:', err);
+  });
 };
```
997-1003: Consider using stable keys instead of array index.

Using array index as React keys (`key={idx}`) can cause issues if items are reordered or removed. If bottlenecks/recommendations have unique identifiers from the API, use those instead.

Also applies to: 1017-1025
web/pgadmin/llm/__init__.py (2)
604-607: Consider exception chaining for better debugging.

Using `raise ... from e` preserves the original exception context, which helps with debugging.

```diff
 except urllib.error.HTTPError as e:
     if e.code == 401:
-        raise Exception('Invalid API key')
-    raise Exception(f'API error: {e.code}')
+        raise Exception('Invalid API key') from e
+    raise Exception(f'API error: {e.code}') from e
```
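The effect of `from e` is easy to demonstrate in isolation; a small stdlib-only illustration (the function and names here are hypothetical):

```python
def parse_port(raw: str) -> int:
    """Convert a config value to a port number, chaining the cause."""
    try:
        return int(raw)
    except ValueError as e:
        # 'from e' records the ValueError on __cause__, so tracebacks
        # show "The above exception was the direct cause of ..."
        raise RuntimeError(f'Invalid port setting: {raw!r}') from e


try:
    parse_port('not-a-number')
except RuntimeError as err:
    original = err.__cause__  # the ValueError is preserved here
```

Without `from e`, the context is still attached implicitly via `__context__`, but `from e` makes the causal link explicit and survives `raise ... from None` suppression decisions.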
266-300: Consider DRY refactor for exposed URL endpoints.

The `get_exposed_url_endpoints` method has a long list that mirrors the route definitions. If routes are added/removed, this list must be manually kept in sync. Consider generating this list programmatically or using a decorator pattern.
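One way to avoid the drift is a small registry populated at decoration time; a hedged sketch of the idea (the decorator, endpoint names, and views below are hypothetical, not pgAdmin's actual module API):

```python
# Each route handler opts in once; the exposed-endpoints list is then
# derived from the registry instead of being maintained by hand.
EXPOSED_ENDPOINTS: list[str] = []


def exposed(endpoint_name: str):
    """Register a view as client-exposed at definition time."""
    def decorator(func):
        EXPOSED_ENDPOINTS.append(endpoint_name)
        return func
    return decorator


@exposed('llm.chat')
def chat_view():
    return 'chat'


@exposed('llm.status')
def status_view():
    return 'status'


def get_exposed_url_endpoints() -> list[str]:
    # Always reflects the decorated views -- nothing to keep in sync.
    return list(EXPOSED_ENDPOINTS)
```

Adding a new route then consists of a single decorator line next to the handler, and the exposed list can never go stale.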
web/pgadmin/tools/sqleditor/static/js/components/sections/NLQChatPanel.jsx
Unable to test due to API max limit error.
- Fix all PEP8 issues.
- Fix Coderabbit review comments if possible.
- Fix SonarQube code smells as much as possible.
GUI Comments:
- Loading an OpenAI model takes more than a minute, and it reloads each time I select the OpenAI option in the preferences.
- Add the @mui/icons-material/AssessmentRounded icon to the reports panel, if feasible.
khushboovashi left a comment
I tried testing the AI Assistant with the Ollama (llama3 model), but every time I start it, within 2 to 3 minutes, my machine hangs, and the only option is to restart my machine.
- Fix Python style issues (line lengths, blank lines)
- Add DOMPurify for XSS protection in markdown rendering
- Add CSRF token to NLQ chat fetch request
- Fix SQL injection by using parameterized queries
- Fix potential UnboundLocalError in pipeline.py
- Remove unused imports and PipelineProgress class
- Add default model hints to preferences
- Add DEFAULT_API_URL constant for Ollama provider
- Update copyright years to 2026
Actionable comments posted: 0
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (23)
web/pgadmin/llm/providers/ollama.py-235-237 (1)
235-237: Remove redundant `import re` statement.

`re` is already imported at line 13 but is re-imported here. This shadows the module-level import unnecessarily.

🐛 Proposed fix

```diff
 def _parse_response(self, data: dict) -> LLMResponse:
     """Parse the Ollama API response into an LLMResponse."""
-    import re
-
     message = data.get('message', {})
```

web/pgadmin/llm/tests/test_report_endpoints.py-31-66 (1)
31-66: Fix line length issues to resolve pipeline failures.

Lines 34 and 51 exceed the maximum line length. Consider breaking long strings and method chains across multiple lines.
Suggested fix
```diff
 def runTest(self):
     """Test security report endpoint at server level"""
     with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
-            patch('pgadmin.llm.reports.generator.generate_report_sync') as mock_generate, \
+            patch('pgadmin.llm.reports.generator.generate_report_sync') \
+            as mock_generate, \
             patch('pgadmin.utils.driver.get_driver') as mock_get_driver:

         # Mock database connection
         mock_conn = MagicMock()
         mock_conn.connected.return_value = True
         mock_manager = MagicMock()
         mock_manager.connection.return_value = mock_conn
         mock_driver = MagicMock()
         mock_driver.connection_manager.return_value = mock_manager
         mock_get_driver.return_value = mock_driver

         mock_enabled.return_value = self.llm_enabled
         if self.llm_enabled:
-            mock_generate.return_value = (True, "# Security Report\n\nNo issues found.")
+            mock_generate.return_value = (
+                True, "# Security Report\n\nNo issues found."
+            )
```

web/pgadmin/llm/tests/test_report_endpoints.py-81-108 (1)
81-108: Fix line length issues in PerformanceReportDatabaseTestCase.

Lines 84, 100, and 102 exceed the maximum line length.
Suggested fix
```diff
 def runTest(self):
     """Test performance report endpoint at database level"""
     with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
-            patch('pgadmin.llm.reports.generator.generate_report_sync') as mock_generate, \
+            patch('pgadmin.llm.reports.generator.generate_report_sync') \
+            as mock_generate, \
             patch('pgadmin.utils.driver.get_driver') as mock_get_driver:

         # ... mock setup ...

         mock_enabled.return_value = self.llm_enabled
-        mock_generate.return_value = (True, "# Performance Report\n\nOptimization suggestions...")
+        mock_generate.return_value = (
+            True, "# Performance Report\n\nOptimization suggestions..."
+        )

-        url = '/llm/database-performance-report/' + str(self.server_id) + '/' + str(self.db_id)
+        url = (f'/llm/database-performance-report/'
+               f'{self.server_id}/{self.db_id}')
```

web/pgadmin/llm/providers/__init__.py-12-16 (1)
12-16: Add `DockerClient` to `__all__` exports for consistency.

`DockerClient` is implemented in `web/pgadmin/llm/providers/docker.py` and used throughout the codebase (e.g., in `web/pgadmin/llm/client.py`), but it is missing from the `__all__` list. This is inconsistent with how the other provider clients (`AnthropicClient`, `OpenAIClient`, `OllamaClient`) are exported. Add `DockerClient` to the exports:

```python
__all__ = ['AnthropicClient', 'OpenAIClient', 'OllamaClient', 'DockerClient']
```

Also add the corresponding import at the top of the file.
web/pgadmin/llm/tests/test_report_endpoints.py-204-233 (1)
204-233: Fix line length issue in ReportErrorHandlingTestCase.

Line 207 exceeds the maximum line length.
Suggested fix
```diff
 def runTest(self):
     """Test report endpoint handles LLM API errors gracefully"""
     with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
-            patch('pgadmin.llm.reports.generator.generate_report_sync') as mock_generate, \
+            patch('pgadmin.llm.reports.generator.generate_report_sync') \
+            as mock_generate, \
             patch('pgadmin.utils.driver.get_driver') as mock_get_driver:
```

web/pgadmin/llm/tests/test_report_endpoints.py-124-152 (1)
124-152: Fix line length issues in DesignReportSchemaTestCase.

Lines 127, 134, 144, and 146 exceed the maximum line length.
Suggested fix
```diff
 def runTest(self):
     """Test design review report endpoint at schema level"""
     with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
-            patch('pgadmin.llm.reports.generator.generate_report_sync') as mock_generate, \
+            patch('pgadmin.llm.reports.generator.generate_report_sync') \
+            as mock_generate, \
             patch('pgadmin.utils.driver.get_driver') as mock_get_driver:

         # Mock connection to return schema name
         mock_conn = MagicMock()
         mock_conn.connected.return_value = True
         mock_conn.db = 'testdb'
-        mock_conn.execute_dict.return_value = (True, {'rows': [{'nspname': 'public'}]})
+        mock_conn.execute_dict.return_value = (
+            True, {'rows': [{'nspname': 'public'}]}
+        )

         # ... mock setup ...

         mock_enabled.return_value = self.llm_enabled
-        mock_generate.return_value = (True, "# Design Review\n\nSchema structure looks good...")
+        mock_generate.return_value = (
+            True, "# Design Review\n\nSchema structure looks good..."
+        )

-        url = '/llm/schema-design-report/' + str(self.server_id) + '/' + str(self.db_id) + '/' + str(self.schema_id)
+        url = (f'/llm/schema-design-report/'
+               f'{self.server_id}/{self.db_id}/{self.schema_id}')
```

web/pgadmin/llm/tests/test_llm_status.py-54-57 (1)
54-57: Fix line too long to resolve pipeline failure.

Line 56 exceeds the maximum line length, causing the GitHub Actions style check to fail.
Suggested fix
```diff
-with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
-        patch('pgadmin.llm.utils.is_llm_enabled_system') as mock_system, \
-        patch('pgadmin.llm.utils.get_default_provider') as mock_provider, \
-        patch('pgadmin.authenticate.mfa.utils.mfa_required', lambda f: f):
+with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
+        patch('pgadmin.llm.utils.is_llm_enabled_system') \
+        as mock_system, \
+        patch('pgadmin.llm.utils.get_default_provider') \
+        as mock_provider, \
+        patch('pgadmin.authenticate.mfa.utils.mfa_required',
+              lambda f: f):
```

web/pgadmin/llm/tests/test_report_endpoints.py-164-190 (1)
164-190: Fix line length issues in StreamingReportTestCase.

Lines 167 and 187 exceed the maximum line length.
Suggested fix
```diff
 def runTest(self):
     """Test streaming report endpoint uses SSE format"""
     with patch('pgadmin.llm.utils.is_llm_enabled') as mock_enabled, \
-            patch('pgadmin.llm.reports.generator.generate_report_streaming') as mock_streaming, \
+            patch('pgadmin.llm.reports.generator.generate_report_streaming') \
+            as mock_streaming, \
             patch('pgadmin.utils.driver.get_driver') as mock_get_driver:

         # ... mock setup ...

-        # SSE endpoints should return 200 and have text/event-stream content type
+        # SSE endpoints should return 200 and have
+        # text/event-stream content type
```

web/pgadmin/llm/chat.py-62-78 (1)
62-78: Fix line length in docstring to pass CI.

Line 68 exceeds the maximum line length (E501 pipeline failure).
🔧 Fix line length
```diff
     system_prompt: Optional custom system prompt (uses default if None)
-    max_tool_iterations: Maximum number of tool call rounds (uses preference)
+    max_tool_iterations: Maximum number of tool call rounds
+        (uses preference if not specified)
     provider: Optional LLM provider override
```

web/pgadmin/llm/reports/queries.py-22-836 (1)
22-836: Fix line length violations to pass CI.

The pipeline reports 28 lines exceeding the maximum length. These SQL queries need to be reformatted to comply with the project's line length limit.
Key lines to fix (from pipeline errors):
- Lines 73, 126, 146, 164-165, 321-322, 413, 504, 518, 549-551, 576, 589, 596, 600, 614, 619, 637, 655, 659, 666-667, 693, 707, 757, 762
For multi-line SQL strings, break at logical points:
🔧 Example fix pattern for long lines
```diff
 'schema_acls': {
     'sql': """
         SELECT n.nspname as schema_name,
                pg_catalog.pg_get_userbyid(n.nspowner) as owner,
                n.nspacl as acl
         FROM pg_namespace n
-        WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
+        WHERE n.nspname NOT IN (
+            'pg_catalog', 'information_schema', 'pg_toast'
+        )
           AND n.nspname NOT LIKE 'pg_temp%'
```

web/pgadmin/llm/reports/queries.py-851-907 (1)
851-907: Fix pipeline failures and unused parameter.

The `execute_query` function has several issues flagged by the CI pipeline and static analysis:
- Pipeline failure: Line 68 in the docstring mentions `max_tool_iterations`, which triggers E501 (multiple SQL lines also exceed 79 chars; see the pipeline errors).
- Unused parameter: the `context` parameter is never used but is documented as "Execution context (for scope filtering)". Either implement scope filtering or remove the parameter.

🔧 Suggested fix for unused parameter
If `context` is intended for future use, prefix it with an underscore to indicate it's intentionally unused:

```diff
 def execute_query(
     conn,
     query_id: str,
-    context: dict,
+    _context: dict,
     params: Optional[list] = None
 ) -> dict[str, Any]:
```

Or remove it entirely if not needed:
```diff
 def execute_query(
     conn,
     query_id: str,
-    context: dict,
     params: Optional[list] = None
 ) -> dict[str, Any]:
```

The broad `except Exception` at line 906 is acceptable here since errors are captured and returned in the result dict rather than silently swallowed.

web/pgadmin/llm/providers/docker.py-51-60 (1)
51-60: Fix line length violations causing pipeline failure.

Lines 51 and 56 exceed the maximum length.
🔧 Suggested fix
```diff
-    def __init__(self, api_url: Optional[str] = None, model: Optional[str] = None):
+    def __init__(
+        self, api_url: Optional[str] = None, model: Optional[str] = None
+    ):
         """
         Initialize the Docker Model Runner client.

         Args:
-            api_url: The Docker Model Runner API URL (default: http://localhost:12434).
+            api_url: The Docker Model Runner API URL.
+                Default: http://localhost:12434
             model: Optional model name. Defaults to ai/qwen3-coder.
         """
         self._api_url = (api_url or DEFAULT_API_URL).rstrip('/')
         self._model = model or DEFAULT_MODEL
```

web/pgadmin/llm/providers/openai.py-309-330 (1)
309-330: Fix line length violations causing pipeline failure.

Lines 316, 318, and 326 exceed the maximum line length, causing the CI pipeline to fail.
🔧 Proposed fix
```diff
 # Check for problematic responses
 if not content and not tool_calls:
     if stop_reason == StopReason.MAX_TOKENS:
         input_tokens = usage.input_tokens
         raise LLMClientError(LLMError(
-            message=f'Response truncated due to token limit '
-                    f'(input: {input_tokens} tokens). '
-                    f'The request is too large for model {self._model}. '
-                    f'Try using a model with a larger context window, '
-                    f'or analyze a smaller scope (e.g., a specific schema '
-                    f'instead of the entire database).',
+            message=(
+                f'Response truncated due to token limit '
+                f'(input: {input_tokens} tokens). '
+                f'The request is too large for model {self._model}. '
+                f'Try using a model with a larger context window, '
+                f'or analyze a smaller scope (e.g., a specific '
+                f'schema instead of the entire database).'
+            ),
             code='max_tokens',
             provider=self.provider_name,
             retryable=False
         ))
     elif finish_reason and finish_reason not in ('stop', 'tool_calls'):
         raise LLMClientError(LLMError(
-            message=f'Empty response with finish reason: {finish_reason}',
+            message=(
+                f'Empty response with finish reason: {finish_reason}'
+            ),
             code=finish_reason,
             provider=self.provider_name,
             retryable=False
         ))
```

web/pgadmin/llm/providers/docker.py-316-336 (1)
316-336: Fix line length violations in error messages.

Lines 323 and 332 exceed the maximum length, similar to the OpenAI client.
🔧 Suggested fix
```diff
 # Check for problematic responses
 if not content and not tool_calls:
     if stop_reason == StopReason.MAX_TOKENS:
         input_tokens = usage.input_tokens
         raise LLMClientError(LLMError(
-            message=f'Response truncated due to token limit '
-                    f'(input: {input_tokens} tokens). '
-                    f'The request is too large for model {self._model}. '
-                    f'Try using a model with a larger context window, '
-                    f'or analyze a smaller scope.',
+            message=(
+                f'Response truncated due to token limit '
+                f'(input: {input_tokens} tokens). '
+                f'The request is too large for model '
+                f'{self._model}. Try a model with a larger '
+                f'context window, or analyze a smaller scope.'
+            ),
             code='max_tokens',
             provider=self.provider_name,
             retryable=False
         ))
     elif finish_reason and finish_reason not in ('stop', 'tool_calls'):
         raise LLMClientError(LLMError(
-            message=f'Empty response with finish reason: {finish_reason}',
+            message=(
+                f'Empty response with finish reason: '
+                f'{finish_reason}'
+            ),
             code=finish_reason,
             provider=self.provider_name,
             retryable=False
         ))
```

web/pgadmin/llm/reports/prompts.py-17-24 (1)
17-24: Fix line length violations in prompt templates.

Multiple lines exceed the maximum length limit, causing pipeline failures. For multi-line string prompts, consider using implicit string concatenation or reformatting.
🔧 Suggested fix for PLANNING_SYSTEM_PROMPT
```diff
-PLANNING_SYSTEM_PROMPT = """You are a PostgreSQL expert helping to plan a database analysis report.
+PLANNING_SYSTEM_PROMPT = (
+    """You are a PostgreSQL expert helping to plan a """
+    """database analysis report.

-Your task is to select which analysis sections are most relevant for the given report type and database context.
+Your task is to select which analysis sections are most relevant """
+    """for the given report type and database context.

 Return ONLY a JSON array of section IDs to analyze, ordered by priority.
 Only include sections that are relevant given the database characteristics.
 Do not include any explanation, just the JSON array."""
+)
```
Note: similar reformatting is needed for `SECTION_ANALYSIS_SYSTEM_PROMPT` (line 65), `SYNTHESIS_SYSTEM_PROMPT` (lines 133-154), and the f-strings in `get_section_analysis_prompt` (line 114) and `get_synthesis_prompt` (line 189).

web/pgadmin/llm/reports/pipeline.py-184-241 (1)
184-241: Fix line length on line 237.

The planning logic and fallback handling are solid. Line 237 exceeds the maximum length.
🔧 Proposed fix
```diff
-        return valid_ids if valid_ids else [s['id'] for s in available_sections]
+        return (
+            valid_ids if valid_ids
+            else [s['id'] for s in available_sections]
+        )
```

web/pgadmin/llm/reports/pipeline.py-46-92 (1)
46-92: Fix line length to pass CI.

Lines 61 and 91 exceed the maximum line length. The implementation logic is correct.
🔧 Proposed fix for line length
```diff
 Args:
-    report_type: Type of report ('security', 'performance', 'design').
+    report_type: Type of report
+        ('security', 'performance', 'design').
     sections: List of available Section definitions.
```

```diff
-        raise ReportPipelineError(event.get('message', 'Unknown error'))
+        raise ReportPipelineError(
+            event.get('message', 'Unknown error')
+        )
```

web/pgadmin/llm/__init__.py-1034-1135 (1)
1034-1135: Fix line length on line 1053.

The LLM report generation is well-structured. Line 1053 exceeds the maximum length.
🔧 Proposed fix
```diff
-    "IMPORTANT: Do NOT include a report title, header block, or "
+    "IMPORTANT: Do NOT include a report title, header block, "
+    "or "
```

Or restructure the string concatenation to keep each line under the limit.
web/pgadmin/llm/tools/database.py-727-806 (1)
727-806: Tool definitions are well-documented.

The DATABASE_TOOLS list provides clear descriptions and parameter schemas. Fix line lengths at lines 744 and 755.
🔧 Proposed fix for line lengths
```diff
 "query": {
     "type": "string",
     "description": (
-        "The SQL query to execute. Should be a SELECT query "
-        "or other read-only statement. DML statements will fail."
+        "The SQL query to execute. Should be a SELECT "
+        "query or other read-only statement. "
+        "DML statements will fail."
     )
 }
```

```diff
 Tool(
     name="get_database_schema",
     description=(
-        "Get a list of all schemas, tables, and views in the database. "
-        "Use this to understand the database structure before writing queries."
+        "Get a list of all schemas, tables, and views in the "
+        "database. Use this to understand the database structure "
+        "before writing queries."
     ),
```

web/pgadmin/llm/tools/database.py-288-403 (1)
288-403: Schema retrieval is well-implemented.

The use of pgAdmin's template infrastructure for version-aware queries is good. The `schema_oid` used in the views query (lines 368-375) is safe since it comes from the database's own `pg_namespace.oid` column.

Fix line lengths at lines 367 and 370.
🔧 Proposed fix for line lengths
```diff
 # Get views for this schema (relkind v=view, m=materialized view)
 views_sql = f"""
-    SELECT c.oid, c.relname AS name,
-           pg_catalog.obj_description(c.oid, 'pg_class') AS description
+    SELECT c.oid, c.relname AS name,
+           pg_catalog.obj_description(
+               c.oid, 'pg_class'
+           ) AS description
     FROM pg_catalog.pg_class c
     WHERE c.relkind IN ('v', 'm')
       AND c.relnamespace = {schema_oid}::oid
     ORDER BY c.relname
 """
```

web/pgadmin/llm/tools/database.py-145-151 (1)
145-151: Remove unused variable `readonly_wrapper`.

The variable `readonly_wrapper` is defined but never used. This appears to be leftover code.

🔧 Proposed fix
```diff
 # Wrap the query in a read-only transaction
 # This ensures even if the query tries to modify data, it will fail
-readonly_wrapper = """
-    BEGIN TRANSACTION READ ONLY;
-    {query}
-    ROLLBACK;
-"""
-
 # For SELECT queries, we need to handle them differently
```
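The underlying idea — letting the database enforce read-only mode at the session level instead of string-wrapping the SQL — can be sketched with SQLite standing in for PostgreSQL's `SET TRANSACTION READ ONLY` (an illustration of the principle only, not this module's implementation):

```python
import sqlite3

# SQLite's query_only pragma plays the role of PostgreSQL's
# SET TRANSACTION READ ONLY: let the engine reject writes instead
# of trying to pattern-match the SQL text.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')
conn.execute('PRAGMA query_only = ON')

rows = conn.execute('SELECT 1 AS one').fetchall()  # reads still work

try:
    conn.execute('INSERT INTO t VALUES (1)')       # writes now fail
    blocked = False
except sqlite3.OperationalError:
    blocked = True
```

Engine-level enforcement covers cases text inspection misses (writable CTEs, functions with side effects), which is why the dead string wrapper above can simply be deleted.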
210-286: Fix line length issues to pass CI.

Multiple lines exceed the maximum length (240, 367, 370, 489, 594, 611, 744, 755). The read-only query execution logic is correct.
🔧 Example fix for line 240
```diff
 Raises:
-    DatabaseToolError: If the query fails or connection cannot be established
+    DatabaseToolError: If the query fails or connection cannot
+        be established
```

web/pgadmin/llm/reports/pipeline.py-266-395 (1)
266-395: Fix line length issues in retry methods.

The retry logic with exponential backoff and the partial report fallback are well-implemented. Lines 315, 385, 391, and 393 exceed the maximum length.
🔧 Proposed fixes for line lengths
```diff
-yield {
-    'type': 'retry',
-    'reason': 'rate_limit',
-    'message': f'Rate limited, retrying in {wait_time}s...',
-    'wait_seconds': wait_time
-}
+yield {
+    'type': 'retry',
+    'reason': 'rate_limit',
+    'message': (
+        f'Rate limited, retrying in {wait_time}s...'
+    ),
+    'wait_seconds': wait_time
+}
```
- 'message': f'Rate limited, retrying in {wait_time}s...', + 'message': ( + f'Rate limited, retrying in {wait_time}s...' + ),- partial = "**Note**: Synthesis failed. Section summaries:\n\n" + partial = ( + "**Note**: Synthesis failed. " + "Section summaries:\n\n" + ) for r in successful_results: - partial += f"## {r['section_name']}\n\n{r['summary']}\n\n" + partial += ( + f"## {r['section_name']}\n\n" + f"{r['summary']}\n\n" + )
🧹 Nitpick comments (29)
web/migrations/versions/add_tools_ai_permission_.py (1)
35-53: Fetch all rows before updating to avoid potential cursor issues.

Iterating over the result set while executing updates on the same connection can cause cursor invalidation on certain database drivers. Fetching all rows upfront is safer.
♻️ Suggested fix
```diff
 # Get all roles with permissions
 conn = op.get_bind()
 result = conn.execute(
     sa.select(role_table.c.id, role_table.c.permissions)
     .where(role_table.c.permissions.isnot(None))
 )

 # Add tools_ai permission to each role that has permissions
-for row in result:
+for row in result.fetchall():
     role_id = row[0]
     permissions = row[1]
```

web/pgadmin/llm/prompts/__init__.py (1)
10-15: LGTM! Clean package organization.

Good design to centralize prompt template exports. The `__all__` could be alphabetically sorted per Ruff's suggestion, but this is optional.

♻️ Optional: Sort __all__ alphabetically
```diff
-__all__ = ['NLQ_SYSTEM_PROMPT', 'EXPLAIN_ANALYSIS_PROMPT']
+__all__ = ['EXPLAIN_ANALYSIS_PROMPT', 'NLQ_SYSTEM_PROMPT']
```

web/pgadmin/tools/sqleditor/tests/test_explain_analyze_ai.py (2)
153-163: Consider handling non-200 error responses.

The error assertion only handles errors returned as `200 OK` with JSON content. If the endpoint returns a non-200 status (e.g., 400, 401, 500), this assertion would skip validation.

♻️ Suggested improvement for broader error handling
```diff
 if self.expected_error:
     # For error cases, we expect JSON response
-    if response.status_code == 200 and \
-            response.content_type == 'application/json':
+    # Handle both 200 with error payload and non-200 status codes
+    if response.content_type == 'application/json':
         data = json.loads(response.data)
-        self.assertFalse(data.get('success', True))
+        if response.status_code == 200:
+            self.assertFalse(data.get('success', True))
         if hasattr(self, 'error_contains'):
             self.assertIn(
                 self.error_contains,
                 data.get('errormsg', '')
             )
+    else:
+        # Non-JSON error response
+        self.assertNotEqual(response.status_code, 200)
```
71-72: Empty setUp/tearDown methods can be removed.

These empty methods with just `pass` are unnecessary, as `BaseTestGenerator` provides default implementations.

Also applies to: 174-175, 185-186, 202-203
web/pgadmin/tools/sqleditor/static/js/components/sections/NLQChatPanel.jsx (2)
207-210: Add error handling for clipboard operations.

`navigator.clipboard.writeText` can fail (e.g., permissions denied, insecure context). Consider adding a catch handler with user feedback.

♻️ Suggested fix
```diff
 <Tooltip title={gettext('Copy to clipboard')}>
   <IconButton
     size="small"
-    onClick={() => navigator.clipboard.writeText(message.sql)}
+    onClick={() => {
+      navigator.clipboard.writeText(message.sql).catch(() => {
+        // Fallback or silent fail - clipboard access may be denied
+      });
+    }}
   >
     <ContentCopyIcon fontSize="small" />
   </IconButton>
 </Tooltip>
```
728-737: Using array index as React key can cause issues.

If messages can be filtered or reordered (e.g., removing the thinking message), using `idx` as key may cause incorrect component reuse. Consider using a stable identifier.

♻️ Suggested fix - add unique IDs to messages
```diff
 messages.map((msg, idx) => (
   <ChatMessage
-    key={idx}
+    key={msg.id || `msg-${idx}`}
     message={msg}
     onInsertSQL={handleInsertSQL}
     onReplaceSQL={handleReplaceSQL}
```

And ensure all messages have an `id` when created (e.g., `id: Date.now()` or a UUID).

web/pgadmin/llm/static/js/SecurityReport.jsx (2)
263-266: Missing dependency in useEffect may cause stale closure issues.

The `generateReport` function is called inside useEffect but not listed in the dependency array. This could cause the effect to use a stale version of `generateReport` if its dependencies change.

♻️ Suggested fix
```diff
 useEffect(() => {
   // Generate report on mount
   generateReport();
-}, [sid, did, scid, reportType]);
+  // eslint-disable-next-line react-hooks/exhaustive-deps
+}, [sid, did, scid, reportType]); // generateReport intentionally excluded to prevent re-fetch loops
```

Or restructure to include `generateReport` properly with `useCallback`.
212-216: Unused `onClose` prop.

The `onClose` prop is declared in PropTypes and destructured as `_onClose` but never used. Either implement the close functionality or remove the prop.

Also applies to: 377-386
web/pgadmin/llm/providers/ollama.py (2)
131-137: Use exception chaining for better debugging.

When re-raising exceptions, use `raise ... from e` to preserve the original traceback and clarify the exception chain.

♻️ Suggested fix
```diff
 except LLMClientError:
     raise
 except Exception as e:
     raise LLMClientError(LLMError(
-        message=f"Request failed: {str(e)}",
+        message=f"Request failed: {e!s}",
         provider=self.provider_name
-    ))
+    )) from e
```
222-233: Add exception chaining in HTTP error handlers.

Using `raise ... from e` preserves the original exception context for debugging.

♻️ Suggested fix
```diff
     raise LLMClientError(LLMError(
         message=error_msg,
         code=str(e.code),
         provider=self.provider_name,
         retryable=e.code in (500, 502, 503, 504)
-    ))
+    )) from e
 except urllib.error.URLError as e:
     raise LLMClientError(LLMError(
         message=f"Cannot connect to Ollama: {e.reason}",
         provider=self.provider_name,
         retryable=True
-    ))
+    )) from e
```

web/pgadmin/llm/static/js/AIReport.jsx (2)
371-376: Consider using MutationObserver instead of polling for theme changes.

Polling every second with `setInterval` consumes resources continuously. A `MutationObserver` on `document.body` attributes would be more efficient and event-driven.

♻️ Alternative approach using MutationObserver
```jsx
useEffect(() => {
  const updateColors = () => {
    const bodyStyles = window.getComputedStyle(document.body);
    // ... existing color extraction logic
  };

  updateColors();

  // Watch for class/style changes on body (theme switches)
  const observer = new MutationObserver(updateColors);
  observer.observe(document.body, {
    attributes: true,
    attributeFilter: ['class', 'style']
  });

  return () => observer.disconnect();
}, []);
```
556-564: Missing dependencies in useEffect, and cleanup may not run correctly.

The effect calls `generateReport()` and returns a cleanup calling `closeEventSource()`, but neither function is in the dependency array. Additionally, the cleanup references `closeEventSource`, which could be stale.

♻️ Suggested fix
```diff
 useEffect(() => {
   // Generate report on mount
   generateReport();

   // Cleanup on unmount
   return () => {
     closeEventSource();
   };
+  // eslint-disable-next-line react-hooks/exhaustive-deps
 }, [sid, did, scid, reportCategory, reportType]);
```

Or use refs for the cleanup function to avoid stale closures:
```jsx
const closeEventSourceRef = useRef(closeEventSource);
closeEventSourceRef.current = closeEventSource;

useEffect(() => {
  generateReport();
  return () => closeEventSourceRef.current();
}, [sid, did, scid, reportCategory, reportType, generateReport]);
```

web/pgadmin/llm/models.py (1)
100-108: Use explicit `| None` instead of implicit Optional.

The `tool_calls` parameter uses `None` as a default but lacks an explicit optional type annotation, which violates PEP 484.

Suggested fix
```diff
 @classmethod
-def assistant(cls, content: str,
-              tool_calls: list[ToolCall] = None) -> 'Message':
+def assistant(cls, content: str,
+              tool_calls: list[ToolCall] | None = None) -> 'Message':
     """Create an assistant message."""
```

web/pgadmin/llm/tests/test_llm_status.py (1)
11-11: Remove unused import.

`mock_open` is imported but not used in this test file.

Suggested fix
```diff
-from unittest.mock import patch, MagicMock, mock_open
+from unittest.mock import patch, MagicMock
```

web/pgadmin/llm/tests/test_report_endpoints.py (1)
13-13: Remove unused import.

`utils` is imported but not used in this test file.

Suggested fix
```diff
-from regression.python_test_utils import test_utils as utils
```

web/pgadmin/llm/tools/__init__.py (1)
12-30: LGTM - Clean module API surface.

The re-exports properly expose the public API from the database module. The `__all__` list correctly matches the imports.

Consider sorting `__all__` alphabetically for consistency (per static analysis hint RUF022):

♻️ Optional: Sort __all__
```diff
 __all__ = [
+    'DATABASE_TOOLS',
+    'DatabaseToolError',
     'execute_readonly_query',
+    'execute_tool',
     'get_database_schema',
     'get_table_columns',
     'get_table_info',
-    'execute_tool',
-    'DatabaseToolError',
-    'DATABASE_TOOLS'
 ]
```

web/pgadmin/tools/sqleditor/__init__.py (2)
2820-2919: Consider using `logging.exception` for error logging.

The error handling in the generator is reasonable for SSE, but per static analysis (TRY400), use `logging.exception` to include the stack trace:

♻️ Improved error logging
```diff
 except Exception as e:
-    current_app.logger.error(f'NLQ chat error: {str(e)}')
+    current_app.logger.exception('NLQ chat error: %s', e)
     yield _nlq_sse_event({
         'type': 'error',
         'message': str(e)
     })
```

The broad `except Exception` is acceptable here since:
- We're in a generator that must not raise
- Errors are sent to the client via SSE
- Errors are logged for debugging
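For reference, the SSE wire format that a helper like `_nlq_sse_event` has to produce is simple: `data:` lines carrying a JSON payload, terminated by a blank line. A minimal stdlib sketch (the function name is illustrative, not pgAdmin's actual helper):

```python
import json
from typing import Optional


def sse_event(payload: dict, event: Optional[str] = None) -> str:
    """Serialize a payload into Server-Sent Events wire format.

    Each event is one or more `field: value` lines followed by a
    blank line; the JSON body goes in the `data` field, and an
    optional `event` field lets the client route by event type.
    """
    lines = []
    if event:
        lines.append(f'event: {event}')
    lines.append(f'data: {json.dumps(payload)}')
    return '\n'.join(lines) + '\n\n'
```

Because the trailing blank line is what delimits events, an error dict yielded from inside the generator still arrives at the browser as a well-formed event, which is why streaming the error instead of raising is the right move here.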
2953-3110: Apply similar improvements to explain_analyze_stream.

The `explain_analyze_stream` endpoint follows the same pattern as `nlq_chat_stream`. Apply the same `logging.exception` improvement here as well.

♻️ Improved error logging
```diff
 except Exception as e:
-    current_app.logger.error(f'Explain analysis error: {str(e)}')
+    current_app.logger.exception('Explain analysis error: %s', e)
     yield _nlq_sse_event({
         'type': 'error',
         'message': str(e)
     })
```

Also consider prefixing unused unpacked variables per static analysis (line 2999):
```diff
-status, error_msg, conn, trans_obj, session_obj = \
+status, error_msg, _conn, _trans_obj, _session_obj = \
     check_transaction_status(trans_id)
```

web/pgadmin/llm/providers/anthropic.py (2)
114-123: Add exception chaining for better debugging.

When re-raising exceptions, use `from` to preserve the exception chain. This helps with debugging by showing the original cause.

♻️ Add exception chaining
```diff
 except LLMClientError:
     raise
 except Exception as e:
     raise LLMClientError(LLMError(
-        message=f"Request failed: {str(e)}",
+        message=f"Request failed: {e!s}",
         provider=self.provider_name
-    ))
+    )) from e
```
207-226: Add exception chaining for HTTP errors.

The HTTP error handlers should also use exception chaining to preserve context:
♻️ Add exception chaining
```diff
     raise LLMClientError(LLMError(
         message=error_msg,
         code=str(e.code),
         provider=self.provider_name,
         retryable=e.code in (429, 500, 502, 503, 504)
-    ))
+    )) from e
 except urllib.error.URLError as e:
     raise LLMClientError(LLMError(
-        message=f"Connection error: {e.reason}",
+        message=f"Connection error: {e.reason!s}",
         provider=self.provider_name,
         retryable=True
-    ))
+    )) from e
```

web/pgadmin/llm/client.py (1)
87-97: Minor: Remove unused `response` variable assignment.

The `response` variable is assigned but never used. The intent is just to verify the connection succeeds.

♻️ Suggested fix

```diff
     def validate_connection(self) -> tuple[bool, Optional[str]]:
         """
         Validate the connection to the LLM provider.

         Returns:
             Tuple of (success, error_message). If success is True,
             error_message is None.
         """
         try:
             # Try a minimal request to validate the connection
-            response = self.chat(
+            self.chat(
                 messages=[Message.user("Hello")],
                 max_tokens=10
             )
             return True, None
         except LLMError as e:
             return False, str(e)
         except Exception as e:
             return False, f"Connection failed: {str(e)}"
```

web/pgadmin/llm/providers/openai.py (1)
122-128: Consider exception chaining for better traceability.

Using `raise ... from e` preserves the original exception traceback, which aids debugging.

♻️ Suggested improvement

```diff
     try:
         response_data = self._make_request(payload)
         return self._parse_response(response_data)
     except LLMClientError:
         raise
     except Exception as e:
         raise LLMClientError(LLMError(
             message=f"Request failed: {str(e)}",
             provider=self.provider_name
-        ))
+        )) from e
```

web/pgadmin/llm/reports/prompts.py (1)
110-112: Consider moving import to module level.

The `json` import inside the function works but is unconventional. Moving it to the top of the file improves readability.

♻️ Suggested change

Add at the top of the file after line 10:

```python
import json
```

Then remove line 110:

```diff
-    import json
     data_json = json.dumps(data, indent=2, default=str)
```

web/pgadmin/llm/utils.py (1)
64-74: Consider adding debug logging for silent exceptions.

The `try-except-pass` pattern is appropriate for graceful degradation, but silent failures can make debugging configuration issues difficult. Consider adding debug-level logging.

♻️ Suggested improvement

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 def _get_preference_value(name):
     """
     Get a preference value, returning None if empty or not set.
     ...
     """
     try:
         pref_module = Preferences.module('ai')
         if pref_module:
             pref = pref_module.preference(name)
             if pref:
                 value = pref.get()
                 if value and str(value).strip():
                     return str(value).strip()
     except Exception:
-        pass
+        logger.debug("Failed to get preference '%s'", name, exc_info=True)
     return None
```

web/pgadmin/llm/reports/__init__.py (1)
17-37: LGTM! Module facade correctly consolidates public API.

The re-exports from submodules provide a clean public interface for the LLM reports package. Consider sorting `__all__` alphabetically for consistency with isort conventions.

♻️ Optional: Sort `__all__` alphabetically

```diff
 __all__ = [
+    'DESIGN_SECTIONS',
+    'PERFORMANCE_SECTIONS',
     'ReportPipeline',
+    'SECURITY_SECTIONS',
     'Section',
     'SectionResult',
     'Severity',
-    'SECURITY_SECTIONS',
-    'PERFORMANCE_SECTIONS',
-    'DESIGN_SECTIONS',
+    'execute_query',
+    'get_query',
     'get_sections_for_report',
     'get_sections_for_scope',
-    'get_query',
-    'execute_query',
 ]
```

web/pgadmin/llm/reports/pipeline.py (1)
397-454: LGTM! Helper methods are well-implemented.

The retry logic is consistent, and severity extraction handles both text patterns and emoji indicators. Consider adding a return type hint to `_call_llm_with_retry` for completeness.

web/pgadmin/llm/__init__.py (2)
586-638: Consider chaining exceptions for better traceability.

The model fetching logic is correct. Consider using `raise ... from e` to preserve the exception chain for debugging.

♻️ Optional: Chain exceptions

```diff
     except urllib.error.HTTPError as e:
         if e.code == 401:
-            raise Exception('Invalid API key')
-        raise Exception(f'API error: {e.code}')
+            raise Exception('Invalid API key') from e
+        raise Exception(f'API error: {e.code}') from e
```
907-1031: `_gather_security_config` has unused `manager` parameter.

The function collects security configuration data correctly. However, the `manager` parameter on line 907 is never used. Consider removing it or prefixing it with an underscore if kept for API consistency.

♻️ Proposed fix

```diff
-def _gather_security_config(conn, manager):
+def _gather_security_config(conn, _manager):
     """
     Gather security-related configuration from the PostgreSQL server.
     """
```

Or remove it entirely if callers can be updated.

web/pgadmin/llm/tools/database.py (1)
87-126: Unused `manager` parameter.

The `manager` parameter is passed but never used. Consider removing it or prefixing it with an underscore.

♻️ Proposed fix

```diff
-def _connect_readonly(manager, conn, conn_id: str) -> tuple[bool, str]:
+def _connect_readonly(_manager, conn, conn_id: str) -> tuple[bool, str]:
```

Or remove the parameter and update callers.
- Add comment to empty setUp() method in test_llm_status.py
- Rename unused parameter to _manager in database.py
- Fix type hint to tuple[bool, str | None] in database.py
- Remove unused Response import from llm/__init__.py
- Remove unused response variable in client.py
- Remove unused LLMClientError and LLMResponse imports from chat.py
- Update copyright year to 2026 in ai_tools.js
Actionable comments posted: 11
🤖 Fix all issues with AI agents
In @web/pgadmin/llm/__init__.py:
- Around line 1041-1065: The system_prompt multi-line string in
web/pgadmin/llm/__init__.py (variable system_prompt) has a line exceeding the
79-character limit; split the long literal lines (particularly the sentence
around "Be specific and actionable in your recommendations. Include the ") into
shorter concatenated string pieces so every source line is ≤79 characters,
preserving the exact text and spacing/line breaks in the resulting string and
keeping it inside the existing parentheses.
- Around line 1034-1134: The function _generate_security_report_llm defines an
unused parameter manager (ARG001) and appears unused across the codebase; remove
the unused parameter from the function signature (drop manager) and update any
callers to match, and if no callers exist, delete the entire
_generate_security_report_llm function to eliminate dead code; ensure references
to Message and json remain covered by other functioning code paths (or move
relevant logic into generate_report_sync if needed) before removing.
In @web/pgadmin/llm/chat.py:
- Around line 45-78: The docstring for chat_with_database contains a line that
exceeds the 79-character limit; edit the docstring to wrap long sentences so
each line is ≤79 chars (for example split the long sentence about tool call
rounds and continuing until a final response into two lines or reformat the
"Returns"/"Raises" paragraphs into shorter lines), ensuring the content and
wording of the docstring remain unchanged and the function signature and return
types (tuple[str, list[Message]]) are preserved.
In @web/pgadmin/llm/tests/test_llm_status.py:
- Around line 55-58: The long with-statement chaining multiple patches
(patch('pgadmin.llm.utils.is_llm_enabled'),
patch('pgadmin.llm.utils.is_llm_enabled_system'),
patch('pgadmin.llm.utils.get_default_provider'),
patch('pgadmin.authenticate.mfa.utils.mfa_required', lambda f: f)) exceeds the
79-character limit; break it into a readable, PEP8-compliant form by either
using unittest.mock.patch.multiple to patch the three pgadmin.llm.utils targets
and a separate patch for mfa_required, or by using a parenthesized multi-line
with statement or nested with blocks around the same targets so each patch call
sits on its own line while preserving the same mocked names (mock_enabled,
mock_system, mock_provider) and behavior in the test_llm_status.py test.
In @web/pgadmin/llm/tools/database.py:
- Around line 238-241: The docstring under the Raises section exceeds the
79-character limit; wrap the long description for the DatabaseToolError entry
into one or two shorter lines (for example split "DatabaseToolError: If the
query fails or connection cannot be established" into multiple lines) so that no
line in the docstring exceeds 79 characters, and ensure the surrounding
triple-quoted docstring formatting remains intact for the function/class where
this Raises block appears.
- Around line 367-375: The f-string assigned to views_sql has lines that exceed
the 79-character limit; reformat the multi-line SQL string to wrap long lines
(or build it with implicit concatenation using parentheses) so each source line
stays within 79 chars while preserving the f-string interpolation of schema_oid;
update the assignment to views_sql accordingly and keep the SQL content and the
schema_oid::oid interpolation unchanged.
- Around line 485-493: The dict construction for each column in the loop
produces a line that exceeds the 79-character limit; refactor the dict literal
inside the for loop (in the block that builds columns from cols_res rows) by
breaking long key/value pairs onto separate lines (e.g., put each 'name',
'data_type', 'not_null', 'has_default', and 'description' entries on their own
line or assign row.get(...) results to short temporaries) so no source line is
longer than 79 characters while preserving the same keys and values.
- Around line 600-615: The SQL assigned to constraints_sql exceeds the 79-char
line length; split the long line that contains "WHERE con.conrelid =
{table_oid}::oid" into shorter pieces so each line is <=79 chars (for example
move the WHERE clause onto its own line or use implicit string concatenation or
a short f-string addition) while keeping the f-string interpolation of table_oid
intact; update the constraints_sql definition to use multiple shorter literal
lines (referencing constraints_sql and table_oid) so CI line-length checks pass.
- Around line 588-598: The dict construction for each column produces a line
longer than 79 chars; fix by assigning long expressions to short local variables
before building the dict (e.g., compute data_type = row.get('displaytypname') or
row.get('datatype') and compute not_null/has_default similarly if needed), then
use those variables inside the columns.append call so no single source line
exceeds the limit; update the block around conn.execute_dict(columns_sql), the
loop over cols_res.get('rows', []), and the columns.append call to use these
intermediate variables.
- Around line 751-762: The description string for the Tool with
name="get_database_schema" is longer than 79 characters; break it into shorter
concatenated string literals or split into multiple lines inside the existing
parentheses so no single source line exceeds 79 chars (e.g., split the sentence
into two quoted pieces joined by implicit string concatenation within the
parentheses) and ensure the resulting code still assigns the full description to
the description parameter of Tool.
- Around line 738-750: The description string under the JSON schema for the
"query" property (inside the dict assigned to properties -> "query" ->
"description") exceeds the 79-character limit; split the long description into
shorter string literals concatenated across multiple lines (or use implicit
adjacent string literals) so each source line stays under 79 chars while
preserving the same text: e.g., break before "or other read-only statement" and
ensure the "required": ["query"] and surrounding structure (the schema dict)
remain unchanged.
🧹 Nitpick comments (11)
web/pgadmin/llm/__init__.py (2)
586-638: Consider validating URL scheme before opening.

The `_fetch_anthropic_models` and similar functions use `urllib.request.urlopen` on hardcoded URLs, which is safe. However, `_fetch_ollama_models` and `_fetch_docker_models` construct URLs from user-provided `api_url` values. While the URL is validated to some degree by the URL normalization, consider explicitly validating the scheme to prevent potential SSRF if a malicious URL scheme is provided.

Example scheme validation

```python
def _validate_url_scheme(url: str) -> bool:
    """Validate URL uses an allowed scheme."""
    from urllib.parse import urlparse
    parsed = urlparse(url)
    return parsed.scheme in ('http', 'https')
```

Then use before constructing the request URL in `_fetch_ollama_models` and `_fetch_docker_models`.
607-610: Use exception chaining for better traceability.

When re-raising exceptions, use `raise ... from e` to preserve the exception chain.

Proposed fix

```diff
     except urllib.error.HTTPError as e:
         if e.code == 401:
-            raise Exception('Invalid API key')
-        raise Exception(f'API error: {e.code}')
+            raise Exception('Invalid API key') from e
+        raise Exception(f'API error: {e.code}') from e
```

web/pgadmin/llm/tests/test_llm_status.py (2)
19-43: Consider adding Docker provider test case.

The scenarios cover Anthropic, OpenAI, and Ollama, but Docker Model Runner is missing. For completeness, add a test case for the Docker provider.

Add Docker test scenario

```diff
         ('LLM Status - Ollama Enabled', dict(
             url='/llm/status',
             provider_enabled=True,
             expected_enabled=True,
             provider_name='ollama'
         )),
+        ('LLM Status - Docker Enabled', dict(
+            url='/llm/status',
+            provider_enabled=True,
+            expected_enabled=True,
+            provider_name='docker'
+        )),
     ]
```
45-47: Empty `setUp` may unintentionally skip parent setup.

The comment says "no setup needed," but `BaseTestGenerator.setUp()` performs important initialization like setting `self.server_id` and connecting to the server. Since these tests mock LLM utilities and don't need database access, this is likely intentional, but calling `super().setUp()` inside a try/except or explicitly documenting this deviation would be clearer.

web/pgadmin/llm/static/js/ai_tools.js (2)
166-181: LLM status check silently fails.

If the status check fails, `llmEnabled` and `llmSystemEnabled` are set to `false`, but there's no logging or indication of the failure. Consider adding a console warning for debugging purposes.

Add logging on failure

```diff
       .catch(() => {
+        console.warn('Failed to check LLM status - AI features will be disabled');
         this.llmEnabled = false;
         this.llmSystemEnabled = false;
         this.llmStatusChecked = true;
       });
```
398-403: Consider null check for docker handler.

`pgBrowser.getDockerHandler` uses optional chaining (`?.`), but `handler.focus()` and `handler.docker` are called immediately after without checking if `handler` is null. If `getDockerHandler` returns undefined, this will throw.

Add null check

```diff
     let handler = pgBrowser.getDockerHandler?.(
       BROWSER_PANELS.AI_REPORT_PREFIX,
       pgBrowser.docker.default_workspace
     );
+    if (!handler) {
+      pgBrowser.report_error(
+        gettext('Report'),
+        gettext('Unable to open report panel.')
+      );
+      return;
+    }
     handler.focus();
```
web/pgadmin/llm/client.py (1)

79-98: `validate_connection` may be expensive for routine checks.

This method makes an actual chat request to validate the connection. While thorough, this could be slow and consume API tokens. Consider documenting this behavior or providing a lighter-weight alternative for providers that support it (e.g., a models list endpoint).
web/pgadmin/llm/tools/database.py (4)
80-84: Add exception chaining to preserve stack trace.

When re-raising as `DatabaseToolError`, chain the original exception to preserve debugging context.

Proposed fix

```diff
     except Exception as e:
         raise DatabaseToolError(
-            f"Failed to get connection: {str(e)}",
+            f"Failed to get connection: {e!s}",
             code="CONNECTION_ERROR"
-        )
+        ) from e
```
145-151: Remove unused variable `readonly_wrapper`.

This variable is defined but never used.

Proposed fix

```diff
     # Wrap the query in a read-only transaction
     # This ensures even if the query tries to modify data, it will fail
-    readonly_wrapper = """
-        BEGIN TRANSACTION READ ONLY;
-        {query}
-        ROLLBACK;
-    """
-
-    # For SELECT queries, we need to handle them differently
-    # We'll set the transaction to read-only, execute, then rollback
+    # We'll set the transaction to read-only, execute, then rollback
```
196-207: Add exception chaining and consider logging rollback failures.

The exception handling should chain the original exception. While the silent rollback failure is defensive, consider at least logging it for debugging purposes.

Proposed fix

```diff
     except DatabaseToolError:
         raise
     except Exception as e:
         # Attempt rollback on any error
         try:
             conn.execute_void("ROLLBACK")
-        except Exception:
-            pass
-        raise DatabaseToolError(
-            f"Query execution error: {str(e)}",
+        except Exception as rollback_err:
+            # Log but don't mask the original error
+            import logging
+            logging.getLogger(__name__).debug(
+                f"Rollback failed during error handling: {rollback_err}"
+            )
+        raise DatabaseToolError(
+            f"Query execution error: {e!s}",
             code="EXECUTION_ERROR"
-        )
+        ) from e
```
260-264: LIMIT injection is safe but consider validation.

The `max_rows` parameter is typed as `int` with a default value, so the SQL construction is safe from injection. However, consider adding explicit validation to guard against negative values or excessively large limits.

Optional: Add bounds validation

```diff
+    # Validate max_rows bounds
+    if max_rows < 1:
+        max_rows = 1
+    elif max_rows > 10000:
+        max_rows = 10000
+
     # Add LIMIT if not already present and query looks like SELECT
     query_upper = query.strip().upper()
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- web/pgadmin/llm/__init__.py
- web/pgadmin/llm/chat.py
- web/pgadmin/llm/client.py
- web/pgadmin/llm/static/js/ai_tools.js
- web/pgadmin/llm/tests/test_llm_status.py
- web/pgadmin/llm/tools/database.py
🧰 Additional context used
🧬 Code graph analysis (5)
web/pgadmin/llm/chat.py (4)
- web/pgadmin/llm/client.py (3): `get_llm_client` (108-190), `is_llm_available` (193-204), `chat` (51-77)
- web/pgadmin/llm/models.py (5): `Message` (68-122), `StopReason` (25-32), `user` (96-98), `to_message` (172-177), `tool_result` (111-122)
- web/pgadmin/llm/tools/database.py (2): `execute_tool` (666-724), `DatabaseToolError` (41-47)
- web/pgadmin/llm/utils.py (1): `get_max_tool_iterations` (287-308)
web/pgadmin/llm/tests/test_llm_status.py (6)
- web/pgadmin/utils/route.py (1): `BaseTestGenerator` (134-188)
- web/pgadmin/llm/client.py (1): `provider_name` (30-32)
- web/pgadmin/llm/providers/anthropic.py (1): `provider_name` (60-61)
- web/pgadmin/llm/providers/ollama.py (1): `provider_name` (51-52)
- web/pgadmin/llm/providers/openai.py (1): `provider_name` (60-61)
- web/pgadmin/llm/providers/docker.py (1): `provider_name` (63-64)
web/pgadmin/llm/tools/database.py (3)
- web/pgadmin/llm/models.py (1): `Tool` (126-138)
- web/pgadmin/utils/driver/psycopg3/server_manager.py (1): `connection` (201-281)
- web/pgadmin/utils/driver/psycopg3/connection.py (3): `connected` (1405-1410), `execute_2darray` (1223-1280), `execute_dict` (1282-1348)
web/pgadmin/llm/__init__.py (3)
- web/pgadmin/llm/utils.py (3): `is_llm_enabled` (274-284), `is_llm_enabled_system` (261-271), `get_default_provider` (225-258)
- web/pgadmin/llm/reports/generator.py (2): `generate_report_sync` (164-225), `generate_report_streaming` (80-161)
- web/pgadmin/llm/models.py (2): `Message` (68-122), `user` (96-98)
web/pgadmin/llm/static/js/ai_tools.js (3)
- web/pgadmin/llm/static/js/SecurityReport.jsx (1): `api` (222-222)
- web/pgadmin/llm/static/js/AIReport.jsx (2): `api` (378-378), `AIReport` (330-755)
- web/pgadmin/static/js/api_instance.js (1): `getApiInstance` (16-24)
🪛 GitHub Actions: Check Python style
web/pgadmin/llm/chat.py
[error] 68-68: pycodestyle: line too long (81 > 79 characters)
web/pgadmin/llm/tests/test_llm_status.py
[error] 57-57: pycodestyle: line too long (80 > 79 characters)
web/pgadmin/llm/tools/database.py
[error] 240-240: pycodestyle: line too long (81 > 79 characters)
[error] 367-367: pycodestyle: line too long (81 > 79 characters)
[error] 370-370: pycodestyle: line too long (87 > 79 characters)
[error] 489-489: pycodestyle: line too long (82 > 79 characters)
[error] 594-594: pycodestyle: line too long (86 > 79 characters)
[error] 611-611: pycodestyle: line too long (80 > 79 characters)
[error] 744-744: pycodestyle: line too long (81 > 79 characters)
[error] 755-755: pycodestyle: line too long (83 > 79 characters)
web/pgadmin/llm/__init__.py
[error] 1053-1053: pycodestyle: line too long (80 > 79 characters)
🪛 Ruff (0.14.10)
web/pgadmin/llm/chat.py
80-81: Avoid specifying long messages outside the exception class
(TRY003)
85-85: Avoid specifying long messages outside the exception class
(TRY003)
138-138: Do not catch blind exception: Exception
(BLE001)
142-142: Use explicit conversion flag
Replace with conversion flag
(RUF010)
150-152: Avoid specifying long messages outside the exception class
(TRY003)
web/pgadmin/llm/tests/test_llm_status.py
19-43: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
web/pgadmin/llm/tools/database.py
78-78: Consider moving this statement to an else block
(TRY300)
80-80: Do not catch blind exception: Exception
(BLE001)
81-84: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
81-84: Avoid specifying long messages outside the exception class
(TRY003)
82-82: Use explicit conversion flag
Replace with conversion flag
(RUF010)
122-122: Consider moving this statement to an else block
(TRY300)
124-124: Do not catch blind exception: Exception
(BLE001)
147-147: Local variable readonly_wrapper is assigned to but never used
Remove assignment to unused variable readonly_wrapper
(F841)
161-164: Abstract raise to an inner function
(TRY301)
161-164: Avoid specifying long messages outside the exception class
(TRY003)
171-174: Avoid specifying long messages outside the exception class
(TRY003)
198-198: Do not catch blind exception: Exception
(BLE001)
202-203: try-except-pass detected, consider logging the exception
(S110)
202-202: Do not catch blind exception: Exception
(BLE001)
204-207: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
204-207: Avoid specifying long messages outside the exception class
(TRY003)
205-205: Use explicit conversion flag
Replace with conversion flag
(RUF010)
255-258: Avoid specifying long messages outside the exception class
(TRY003)
264-264: Possible SQL injection vector through string-based query construction
(S608)
284-285: try-except-pass detected, consider logging the exception
(S110)
284-284: Do not catch blind exception: Exception
(BLE001)
308-309: Avoid specifying long messages outside the exception class
(TRY003)
331-332: Avoid specifying long messages outside the exception class
(TRY003)
337-338: Avoid specifying long messages outside the exception class
(TRY003)
368-375: Possible SQL injection vector through string-based query construction
(S608)
402-403: try-except-pass detected, consider logging the exception
(S110)
402-402: Do not catch blind exception: Exception
(BLE001)
433-434: Avoid specifying long messages outside the exception class
(TRY003)
444-450: Possible SQL injection vector through string-based query construction
(S608)
454-455: Avoid specifying long messages outside the exception class
(TRY003)
460-463: Avoid specifying long messages outside the exception class
(TRY003)
482-483: Avoid specifying long messages outside the exception class
(TRY003)
509-510: try-except-pass detected, consider logging the exception
(S110)
509-509: Do not catch blind exception: Exception
(BLE001)
541-542: Avoid specifying long messages outside the exception class
(TRY003)
552-558: Possible SQL injection vector through string-based query construction
(S608)
562-563: Avoid specifying long messages outside the exception class
(TRY003)
568-571: Avoid specifying long messages outside the exception class
(TRY003)
601-615: Possible SQL injection vector through string-based query construction
(S608)
662-663: try-except-pass detected, consider logging the exception
(S110)
662-662: Do not catch blind exception: Exception
(BLE001)
694-697: Avoid specifying long messages outside the exception class
(TRY003)
707-710: Avoid specifying long messages outside the exception class
(TRY003)
717-720: Avoid specifying long messages outside the exception class
(TRY003)
724-724: Avoid specifying long messages outside the exception class
(TRY003)
web/pgadmin/llm/__init__.py
365-365: Do not catch blind exception: Exception
(BLE001)
404-404: Do not catch blind exception: Exception
(BLE001)
430-430: Do not catch blind exception: Exception
(BLE001)
469-469: Do not catch blind exception: Exception
(BLE001)
494-494: Do not catch blind exception: Exception
(BLE001)
524-524: Do not catch blind exception: Exception
(BLE001)
549-549: Do not catch blind exception: Exception
(BLE001)
579-579: Do not catch blind exception: Exception
(BLE001)
603-605: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
609-609: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
609-609: Create your own exception
(TRY002)
609-609: Avoid specifying long messages outside the exception class
(TRY003)
610-610: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
610-610: Create your own exception
(TRY002)
610-610: Avoid specifying long messages outside the exception class
(TRY003)
658-660: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
664-664: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
664-664: Create your own exception
(TRY002)
664-664: Avoid specifying long messages outside the exception class
(TRY003)
665-665: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
665-665: Create your own exception
(TRY002)
665-665: Avoid specifying long messages outside the exception class
(TRY003)
701-701: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
704-706: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
709-709: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
709-709: Create your own exception
(TRY002)
709-709: Avoid specifying long messages outside the exception class
(TRY003)
710-710: Do not catch blind exception: Exception
(BLE001)
711-711: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
711-711: Create your own exception
(TRY002)
711-711: Avoid specifying long messages outside the exception class
(TRY003)
711-711: Use explicit conversion flag
Replace with conversion flag
(RUF010)
750-750: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
753-755: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
758-761: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
758-761: Create your own exception
(TRY002)
758-761: Avoid specifying long messages outside the exception class
(TRY003)
762-762: Do not catch blind exception: Exception
(BLE001)
763-763: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
763-763: Create your own exception
(TRY002)
763-763: Avoid specifying long messages outside the exception class
(TRY003)
763-763: Use explicit conversion flag
Replace with conversion flag
(RUF010)
845-845: Do not catch blind exception: Exception
(BLE001)
900-900: Do not catch blind exception: Exception
(BLE001)
1034-1034: Unused function argument: manager
(ARG001)
1201-1201: Do not catch blind exception: Exception
(BLE001)
1258-1258: Do not catch blind exception: Exception
(BLE001)
1341-1341: Do not catch blind exception: Exception
(BLE001)
1410-1410: Do not catch blind exception: Exception
(BLE001)
1479-1479: Do not catch blind exception: Exception
(BLE001)
1534-1534: Do not catch blind exception: Exception
(BLE001)
1605-1605: Do not catch blind exception: Exception
(BLE001)
1662-1662: Do not catch blind exception: Exception
(BLE001)
1733-1733: Do not catch blind exception: Exception
(BLE001)
1790-1790: Do not catch blind exception: Exception
(BLE001)
1873-1873: Do not catch blind exception: Exception
(BLE001)
1942-1942: Do not catch blind exception: Exception
(BLE001)
web/pgadmin/llm/client.py
93-93: Consider moving this statement to an else block
(TRY300)
96-96: Do not catch blind exception: Exception
(BLE001)
97-97: Use explicit conversion flag
Replace with conversion flag
(RUF010)
190-190: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (23)
web/pgadmin/llm/__init__.py (5)
24-29: SSL context setup looks good.

The fallback pattern for certifi is appropriate - it gracefully handles environments where certifi isn't installed while still providing proper SSL certificate verification.
38-265: Preferences registration is well-structured.

The LLMModule class properly registers preferences for multiple LLM providers with appropriate defaults, help strings, and dynamic model loading support via `optionsUrl` and `optionsRefreshUrl` properties.
308-341: Status endpoint implementation is clean and well-structured.

The endpoint properly checks LLM configuration and returns appropriate status information for the client.
787-904: Report endpoints follow a consistent pattern.

The security report endpoints (sync and streaming) properly validate LLM configuration, establish database connections, and delegate to the pipeline. The use of `@pgCSRFProtect.exempt` on SSE streaming endpoints is appropriate since SSE doesn't support POST with CSRF tokens.
907-1032: Verify if `_gather_security_config` is actually unused.

This helper function gathers comprehensive security configuration data, but it's unclear if it's called anywhere in the current code. The report generation endpoints use `generate_report_sync` from the pipeline instead. Search the codebase for all usages of `_gather_security_config` to determine if this is dead code that should be removed, or if it's intended for future use.

web/pgadmin/llm/tests/test_llm_status.py (1)
49-76: Test logic is sound and covers the key scenarios.

The test properly mocks the LLM utility functions, makes the request, and validates the response structure and values.
web/pgadmin/llm/static/js/ai_tools.js (4)
33-41: Initialization pattern is clean.

The guard against re-initialization and the immediate LLM status check on init are well-implemented patterns.
192-209: Good centralized LLM enablement guard.

The `checkLLMEnabled` function properly distinguishes between system-level and user-level disablement, providing appropriate disabled hints for each case.
363-424: Report display logic is well-structured.

The `_showReport` function properly validates node selection, extracts hierarchy information, builds unique panel IDs with timestamps, and renders the AIReport component with appropriate props. The close handler is correctly wired.
426-453: Panel title building logic is clear and handles all cases.

The function correctly formats titles based on report category and type, with proper localization via `gettext`.

web/pgadmin/llm/chat.py (3)
26-42: Well-crafted default system prompt.

The prompt clearly establishes the assistant's role, capabilities, and constraints (read-only mode, row limits). The guidance on query patterns is helpful for consistent LLM behavior.
99-152: Tool execution loop is well-implemented.

The loop correctly:
- Increments iteration before processing (avoiding off-by-one)
- Checks stop reason to determine if more tool calls are needed
- Handles tool errors gracefully by returning error results to the LLM
- Raises RuntimeError when max iterations exceeded
The catch-all `Exception` at line 138 is appropriate here as a safety net for unexpected errors during tool execution.
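The shape of such a loop can be sketched in simplified form; the `chat`/`execute_tool` callables, message dicts, and stop-reason strings below are schematic stand-ins for the actual pgAdmin interfaces:

```python
def run_tool_loop(chat, execute_tool, messages, max_iterations=5):
    """Keep calling the model until it stops requesting tools,
    feeding each tool result (or tool error) back into the history."""
    iterations = 0
    while True:
        iterations += 1  # increment first, avoiding an off-by-one
        if iterations > max_iterations:
            raise RuntimeError("Maximum tool iterations exceeded")
        reply = chat(messages)
        messages.append({'role': 'assistant', 'content': reply})
        if reply.get('stop_reason') != 'tool_use':
            return reply['text']
        for call in reply['tool_calls']:
            try:
                result = execute_tool(call['name'], call['args'])
            except Exception as e:
                # Tool errors go back to the model, not up the stack
                result = {'error': str(e)}
            messages.append({'role': 'tool', 'content': result})


# Fake model: requests one tool call, then produces a final answer.
def fake_chat(messages):
    if not any(m['role'] == 'tool' for m in messages):
        return {'stop_reason': 'tool_use', 'text': '',
                'tool_calls': [{'name': 'row_count',
                                'args': {'table': 'users'}}]}
    return {'stop_reason': 'end_turn', 'text': '42 rows'}


answer = run_tool_loop(fake_chat, lambda name, args: {'rows': 42},
                       [{'role': 'user', 'content': 'How many users?'}])
```

Returning tool errors into the conversation lets the model retry or explain the failure, while the iteration cap bounds runaway tool-call chains.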
155-189: Clean convenience wrapper.

The `single_query` function appropriately wraps `chat_with_database` for one-shot queries, discarding the history as documented.

web/pgadmin/llm/client.py (5)
20-77: Well-designed abstract base class.

The `LLMClient` ABC provides a clean contract for provider implementations with appropriate abstract methods and a sensible default for `validate_connection`. The docstrings are thorough.
96-97: Catch-all exception is acceptable here. The generic `Exception` catch in `validate_connection` is a reasonable safety net for connection validation, as various unexpected errors could occur during the network request.
100-106: Clean custom exception class.
`LLMClientError` properly wraps `LLMError` and provides a clean string representation via the parent class.
108-191: Factory function is well-implemented. The `get_llm_client` function:
- Uses lazy imports to avoid loading unnecessary provider modules
- Properly validates configuration before creating clients
- Returns `None` gracefully when no provider is configured
- Raises appropriate errors for misconfiguration
The use of `LLMClientError` wrapping `LLMError` provides good error context.
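The factory behaviour summarised above (lazy import, configuration validation, graceful `None` when unconfigured) can be sketched roughly as below. The `SUPPORTED_PROVIDERS` set and the `providers.<name>.Client` module layout are illustrative assumptions, not pgAdmin's actual structure.

```python
SUPPORTED_PROVIDERS = {"anthropic", "openai", "ollama", "docker"}

def get_llm_client(provider, config):
    """Return a client for the configured provider, or None if unset."""
    if not provider:
        # No provider configured: LLM features stay disabled gracefully
        return None
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unknown LLM provider: {provider}")
    # Cloud providers need credentials; validate before creating a client
    if provider in ("anthropic", "openai") and not config.get("api_key"):
        raise ValueError(f"Provider {provider} requires an API key")
    # Lazy import keeps unused provider modules out of memory
    module = __import__(f"providers.{provider}", fromlist=["Client"])
    return module.Client(config)
```

Returning `None` rather than raising when no provider is set lets callers treat "LLM disabled" as a normal state rather than an error path.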
193-204: Availability check is appropriately defensive. The `is_llm_available` function catches both `LLMClientError` and `ValueError` to handle all failure modes gracefully.
web/pgadmin/llm/tools/database.py (5)
1-38: LGTM! The imports and constants are well-organized. Template paths follow pgAdmin conventions, and the `LLM_APP_NAME_PREFIX` constant provides good observability for identifying LLM connections in `pg_stat_activity`.
41-47: LGTM! The custom exception class provides useful error categorization with the optional `code` parameter.
87-125: LGTM! The function correctly handles connection setup with appropriate fallback behavior. The silent pass on application_name failure is acceptable since it's non-fatal and documented.
437-450: Good use of `qtLiteral` for SQL escaping. The use of `driver.qtLiteral()` for escaping `schema_name` and `table_name` properly prevents SQL injection. The static analysis warning is a false positive.
666-724: LGTM! Clean dispatcher pattern with proper argument validation. The separation of `ValueError` for unknown tools and `DatabaseToolError` for invalid arguments provides clear error categorization.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
web/pgadmin/llm/tools/database.py (outdated)

```python
    """
    # Wrap the query in a read-only transaction
    # This ensures even if the query tries to modify data, it will fail
    readonly_wrapper = """
```
`readonly_wrapper`, line 147
8 out of 10 code smells are "Optional chaining should be preferred", so wherever this file uses a condition like `if (res.data && res.data.success)`, it should be replaced with `if (res?.data?.success)`.
- Add caching for LLM model options to avoid repeated API calls when switching between preference categories (fixes slow OpenAI model loading issue).
- Add icon to AI Reports panel tab (fa-file-alt).
- Clear options cache on successful refresh to ensure fresh data.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
# Conflicts:
#   web/package.json
#   web/yarn.lock
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@web/config.py`:
- Around line 977-987: Change the global default to opt-out for LLM features by
setting the LLM_ENABLED constant to False (instead of True) so deployments do
not unintentionally enable external AI/data sharing; leave DEFAULT_LLM_PROVIDER
as an empty string and ensure any initialization or config-loading code that
reads LLM_ENABLED/DEFAULT_LLM_PROVIDER (e.g., config consumers that enable LLM
handlers) treats False as fully disabled and update related docs/tests to
reflect the new default.
- Around line 1018-1023: The comment and the DOCKER_API_URL variable are
inconsistent: update the code so the default matches the comment by setting
DOCKER_API_URL = 'http://localhost:12434' in the Docker Model Runner
Configuration block (or if you prefer the disabled behavior, change the comment
to state the default is an empty string); locate the DOCKER_API_URL symbol and
either set its value to 'http://localhost:12434' or adjust the surrounding
comment to reflect that an empty string disables the runner.
- Enforce TLS 1.2 minimum on all SSL contexts to resolve SonarQube weak SSL/TLS vulnerability across all provider files
- Remove unused manager parameter from _generate_security_report_llm
- Remove unused readonly_wrapper variable in database.py
- Replace raise Exception with specific exception types (ValueError, ConnectionError) in model-fetching functions
- Use optional chaining in ai_tools.js (res?.data?.success)
- Fix all line length violations (>79 chars) in chat.py, __init__.py, test_llm_status.py, and database.py
- Fix DOCKER_API_URL comment to say "Typical value" instead of "Default value" since the actual default is empty string

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@web/pgadmin/llm/__init__.py`:
- Around line 1044-1145: Remove the dead legacy function
_generate_security_report_llm from web/pgadmin/llm/__init__.py (it's never
called; reporting now uses
generate_report_sync/ReportPipeline/get_sections_for_scope), and also remove any
imports only used by it (e.g., Message import or other imports added solely for
this function) to avoid unused-code warnings; ensure no other code references
_generate_security_report_llm before deleting.
In `@web/pgadmin/llm/static/js/ai_tools.js`:
- Around line 363-425: The _showReport function calls
pgBrowser.getDockerHandler(...) and immediately invokes handler.focus(), which
can throw if getDockerHandler returns undefined; update _showReport to guard
against a missing handler by checking the result of pgBrowser.getDockerHandler
(the handler variable) before calling handler.focus() and
handler.docker.openTab, and if missing, report an error via
pgBrowser.report_error (or return early); ensure references are to the handler
variable and the methods handler.focus and handler.docker.openTab so the fix is
localized to the handler usage in _showReport.
- Around line 192-209: The checkLLMEnabled function can show a misleading
disabled message because this.llmSystemEnabled and this.llmEnabled default to
false until the async status finishes; update checkLLMEnabled to avoid setting
data.data_disabled until the status is known by either (a) checking a
new/existing flag like this.llmStatusLoaded (only set after the async status
call completes) and returning false without setting data.data_disabled while
!this.llmStatusLoaded, or (b) only treating the properties as authoritative when
they are strict booleans (e.g. check for typeof this.llmSystemEnabled ===
'boolean' and typeof this.llmEnabled === 'boolean' before assigning messages).
Locate and modify checkLLMEnabled to implement one of these guards using the
existing symbols llmSystemEnabled, llmEnabled (or add llmStatusLoaded) so the
user-facing "disabled" reason is shown only after status is loaded.
🧹 Nitpick comments (10)
web/pgadmin/llm/providers/anthropic.py (2)
117-126: Preserve exception context with `raise ... from`. The blind `except Exception` catch loses the original traceback context. Use exception chaining to preserve debugging information.
♻️ Proposed fix

```diff
     try:
         response_data = self._make_request(payload)
         return self._parse_response(response_data)
     except LLMClientError:
         raise
     except Exception as e:
-        raise LLMClientError(LLMError(
-            message=f"Request failed: {str(e)}",
+        raise LLMClientError(LLMError(
+            message=f"Request failed: {e!s}",
             provider=self.provider_name
-        ))
+        )) from e
```
218-229: Add exception chaining in HTTP error handlers. The exception handlers should use `from e` or `from None` to preserve or explicitly suppress the exception chain for better debugging.
♻️ Proposed fix

```diff
         raise LLMClientError(LLMError(
             message=error_msg,
             code=str(e.code),
             provider=self.provider_name,
             retryable=e.code in (429, 500, 502, 503, 504)
-        ))
+        )) from e
     except urllib.error.URLError as e:
         raise LLMClientError(LLMError(
             message=f"Connection error: {e.reason}",
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from e
```

web/pgadmin/llm/providers/docker.py (2)
128-134: Preserve exception context with `raise ... from`. Same pattern as other providers - add exception chaining for better debugging.
♻️ Proposed fix

```diff
     except Exception as e:
-        raise LLMClientError(LLMError(
-            message=f"Request failed: {str(e)}",
+        raise LLMClientError(LLMError(
+            message=f"Request failed: {e!s}",
             provider=self.provider_name
-        ))
+        )) from e
```
230-250: Add exception chaining in HTTP error handlers.
♻️ Proposed fix

```diff
         raise LLMClientError(LLMError(
             message=error_msg,
             code=str(e.code),
             provider=self.provider_name,
             retryable=e.code in (429, 500, 502, 503, 504)
-        ))
+        )) from e
     except urllib.error.URLError as e:
         raise LLMClientError(LLMError(
             message=f"Connection error: {e.reason}. "
                     f"Is Docker Model Runner running at {self._api_url}?",
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from e
     except socket.timeout:
         raise LLMClientError(LLMError(
             message="Request timed out. Local models can be slow - "
                     "try a smaller model or wait for the response.",
             code='timeout',
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from None
```

web/pgadmin/llm/providers/openai.py (2)
125-131: Preserve exception context with `raise ... from`. Consistent with other providers - add exception chaining.
♻️ Proposed fix

```diff
     try:
         response_data = self._make_request(payload)
         return self._parse_response(response_data)
     except LLMClientError:
         raise
     except Exception as e:
-        raise LLMClientError(LLMError(
-            message=f"Request failed: {str(e)}",
+        raise LLMClientError(LLMError(
+            message=f"Request failed: {e!s}",
             provider=self.provider_name
-        ))
+        )) from e
```
224-243: Add exception chaining in HTTP error handlers.
♻️ Proposed fix

```diff
         raise LLMClientError(LLMError(
             message=error_msg,
             code=str(e.code),
             provider=self.provider_name,
             retryable=e.code in (429, 500, 502, 503, 504)
-        ))
+        )) from e
     except urllib.error.URLError as e:
         raise LLMClientError(LLMError(
             message=f"Connection error: {e.reason}",
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from e
     except socket.timeout:
         raise LLMClientError(LLMError(
             message="Request timed out. The request may be too large "
                     "or the server is slow to respond.",
             code='timeout',
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from None
```

web/pgadmin/llm/tools/database.py (1)
234-257: Verify connection ID collision handling. The connection ID is generated using `secrets.choice(range(1, 9999999))`, which could theoretically collide with an existing connection. Consider using `secrets.token_hex()` for guaranteed uniqueness.
♻️ Optional improvement

```diff
-    conn_id = f"llm_{secrets.choice(range(1, 9999999))}"
+    conn_id = f"llm_{secrets.token_hex(8)}"
```

web/pgadmin/llm/tests/test_llm_status.py (2)
19-43: Add test scenario for Docker provider. The scenarios cover Anthropic, OpenAI, and Ollama but are missing Docker Model Runner, which is also a supported provider.
♻️ Add Docker test scenario

```diff
     ('LLM Status - Ollama Enabled', dict(
         url='/llm/status',
         provider_enabled=True,
         expected_enabled=True,
         provider_name='ollama'
     )),
+    ('LLM Status - Docker Enabled', dict(
+        url='/llm/status',
+        provider_enabled=True,
+        expected_enabled=True,
+        provider_name='docker'
+    )),
 ]
```
10-14: Remove unused imports.
`MagicMock` and `mock_open` are imported but not used in this test file.
♻️ Clean up imports

```diff
 import json
-from unittest.mock import patch, MagicMock, mock_open
+from unittest.mock import patch
 from pgadmin.utils.route import BaseTestGenerator
-from regression.python_test_utils import test_utils as utils
```

web/pgadmin/llm/static/js/ai_tools.js (1)
165-181: Avoid stale LLM flags if status is rechecked and the API returns non-success.
♻️ Suggested tweak

```diff
 checkLLMStatus: function() {
   const api = getApiInstance();
   api.get(url_for('llm.status'))
     .then((res) => {
       if (res?.data?.success) {
         this.llmEnabled = res.data.data?.enabled || false;
         this.llmSystemEnabled = res.data.data?.system_enabled || false;
+      } else {
+        this.llmEnabled = false;
+        this.llmSystemEnabled = false;
       }
       this.llmStatusChecked = true;
     })
```
- Remove dead code: _generate_security_report_llm function that is never called (reporting uses pipeline-based generate_report_sync instead)
- Guard checkLLMEnabled against showing misleading "disabled" message before async status check completes
- Add null check for docker handler in _showReport to prevent TypeError if getDockerHandler returns undefined

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@web/pgadmin/llm/__init__.py`:
- Around line 917-1041: Remove the unused dead function _gather_security_config:
delete its entire definition (the function name _gather_security_config and its
body that accepts conn and manager and returns security_info) since it's not
referenced anywhere and report generation uses generate_report_sync from the
reports module to collect data; ensure no other code imports or calls
_gather_security_config remain (remove any related references to conn/manager
usage for this function if present) and run tests to confirm nothing breaks.
- Around line 700-718: The URL passed into the Ollama fetch code (and likewise
into _fetch_docker_models) must have its scheme validated before calling
urllib.request.urlopen to prevent SSRF/file:// attacks; in the function that
builds api_url (the Ollama model loader, e.g., _fetch_ollama_models) parse
api_url with urllib.parse.urlparse and only allow "http" or "https" schemes
(reject empty, "file", "ftp", etc.), raising a ConnectionError/ValueError with a
clear message if the scheme is invalid; apply the same validation logic to the
_fetch_docker_models function, then continue to normalize the URL with
rstrip('/') and call urlopen only after the scheme check.
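The scheme check requested above might look like this minimal helper. It is an illustrative sketch; `validate_api_url` is not an existing pgAdmin function.

```python
from urllib.parse import urlparse

def validate_api_url(api_url):
    """Reject non-http(s) URLs before they reach urllib.request.urlopen.

    Blocks SSRF-style tricks such as file:// or ftp:// schemes, then
    normalizes the URL by stripping any trailing slash.
    """
    parsed = urlparse(api_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(
            f"Unsupported URL scheme {parsed.scheme!r}: "
            "only http and https are allowed")
    return api_url.rstrip("/")
```

Calling this on the configured URL before every `urlopen` covers both the Ollama and Docker model-fetching paths with one shared check.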
🧹 Nitpick comments (4)
web/pgadmin/llm/__init__.py (1)
610-613: Chain exceptions for better debugging context. When re-raising exceptions, use `raise ... from err` to preserve the original traceback. This applies to similar patterns in `_fetch_openai_models`, `_fetch_ollama_models`, and `_fetch_docker_models`.
♻️ Proposed fix for exception chaining

```diff
     except urllib.error.HTTPError as e:
         if e.code == 401:
-            raise ValueError('Invalid API key')
-        raise ConnectionError(f'API error: {e.code}')
+            raise ValueError('Invalid API key') from e
+        raise ConnectionError(f'API error: {e.code}') from e
```

web/pgadmin/llm/static/js/ai_tools.js (3)
176-180: Consider logging errors from the LLM status check. Silent failure in the catch block may hinder debugging when the LLM status endpoint fails unexpectedly. Adding a console warning could help identify configuration or network issues.
♻️ Suggested improvement

```diff
 .catch(() => {
+  console.warn('Failed to check LLM status');
   this.llmEnabled = false;
   this.llmSystemEnabled = false;
   this.llmStatusChecked = true;
 });
```
230-270: Consider extracting shared validation logic. The three `*_report_enabled` functions share a common validation pattern (LLM check → node/item check → hierarchy check → connection check → type check). A helper function could reduce duplication while maintaining readability. This is a minor refactor that can be deferred.
465-474: Add defensive null checks for `info.database` and `info.schema` access. While the `*_report_enabled` functions validate database/schema connectivity, `_buildPanelTitle` directly accesses `info.database.label` and `info.schema.label` without null checks. If this function is ever called with unexpected state (e.g., during a race condition where the tree changes), it could throw a TypeError.
🛡️ Suggested defensive fix

```diff
 if (reportType === 'server') {
   return info.server.label + ' ' + categoryLabel;
 } else if (reportType === 'database') {
-  return info.database.label + ' ' + gettext('on') + ' ' +
+  return (info.database?.label || '') + ' ' + gettext('on') + ' ' +
     info.server.label + ' ' + categoryLabel;
 } else if (reportType === 'schema') {
-  return info.schema.label + ' ' + gettext('in') + ' ' +
-    info.database.label + ' ' + gettext('on') + ' ' +
+  return (info.schema?.label || '') + ' ' + gettext('in') + ' ' +
+    (info.database?.label || '') + ' ' + gettext('on') + ' ' +
     info.server.label + ' ' + categoryLabel;
 }
```
- Fix E501 line-too-long violations across 8 files by wrapping long lines in SQL strings, error messages, docstrings, LLM prompt strings, and test code
- Fix E122 continuation line indentation in database.py

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Move 'or' operator to end of line in database.py (W503)
- Add dompurify to yarn.lock to fix Yarn hardened mode CI

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update down_revision to chain after 018e16dad6aa (BIGINT id column) instead of the shared parent efbbe5d5862f, resolving the multiple heads error.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@web/pgadmin/llm/prompts/explain.py`:
- Around line 55-80: The example JSON in the Response Format block in explain.py
contains an invalid multiline string for the "summary" key which makes the
example unparsable; update the "summary" value so it is a single-line string (or
escape the newline) to produce valid JSON, ensuring the "summary" key in the
Response Format example remains a single JSON string and the overall JSON object
stays syntactically valid.
In `@web/pgadmin/llm/providers/docker.py`:
- Around line 16-23: Validate the Docker API URL's scheme before calling
urllib.request.urlopen: parse the user-configured URL (the variable that holds
the Docker API URL, e.g., docker_api_url) with urllib.parse.urlparse and ensure
parsed.scheme is either "http" or "https"; if not, log or raise an error and do
not call urllib.request.urlopen. Add this check immediately before any use of
urllib.request.urlopen (refer to urllib.request.urlopen in docker.py) so
non-http(s) schemes like file: or ftp: are rejected.
In `@web/pgadmin/llm/providers/openai.py`:
- Around line 294-302: The mapping for OpenAI finish reasons incorrectly assigns
'content_filter' to StopReason.STOP_SEQUENCE; update the stop_reason_map in
openai.py (the variable stop_reason_map used with finish_reason and choice) to
map 'content_filter' to StopReason.ERROR instead, and make the same change in
web/pgadmin/llm/providers/docker.py so both providers consistently treat content
moderation as StopReason.ERROR rather than STOP_SEQUENCE; keep the rest of the
mapping and usage of stop_reason unchanged.
In `@web/pgadmin/llm/tools/database.py`:
- Around line 350-381: The code currently swallows failures from
conn.execute_dict for tables_sql and views_sql and returns empty lists; update
the logic in the block around conn.execute_dict (for both tables_sql and
views_sql) to detect failed queries (status is falsy or views_res/tables_res is
missing) and raise a DatabaseToolError (or return an object with an error field)
instead of returning an empty list; specifically modify the handling after the
conn.execute_dict calls so that when status is False or the returned dict lacks
'rows' you raise DatabaseToolError with a descriptive message mentioning the
query type (e.g., "Failed to fetch tables for schema_oid X" or "Failed to fetch
views for schema_oid X") and include any available error details from the result
to aid debugging.
- Around line 587-649: The get_table_info implementation swallows failures from
conn.execute_dict for columns_sql, constraints_sql and indexes_sql and returns
empty lists, which masks query errors; update get_table_info so after each
conn.execute_dict call (for columns_sql, constraints_sql, indexes_sql) you check
the returned status and if false raise an informative exception (e.g.,
RuntimeError) or return an error indicator that includes the failed SQL
identifier (columns_sql/constraints_sql/indexes_sql) and any error details from
the conn response (cols_res/cons_res/idx_res) instead of building empty
columns/constraints/indexes arrays; apply this pattern consistently for the
columns, constraints and indexes blocks in get_table_info so callers can
distinguish "no rows" from a query failure.
- Around line 253-258: Strip any trailing semicolons from query, then enforce
the server-side cap by always wrapping SELECT queries as a subquery and
appending the outer LIMIT; specifically, replace the current conditional that
checks query_upper and 'LIMIT' with logic that does: trim trailing ';', compute
query_upper = query.strip().upper(), and if query_upper.startswith('SELECT') set
query = f"SELECT * FROM ({query}) AS llm_subquery LIMIT {max_rows + 1}" so
existing LIMITs inside the original query cannot bypass the cap (use the
existing variables query, query_upper, and max_rows and the alias llm_subquery).
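The wrapping logic described above can be sketched as a simplified standalone function. It uses the `max_rows` variable and `llm_subquery` alias named in the comment, but is not the actual pgAdmin implementation.

```python
def cap_select(query, max_rows):
    """Wrap SELECT statements so an outer LIMIT always enforces the cap.

    Because the original query runs as a subquery, any LIMIT it contains
    cannot raise the number of rows the outer LIMIT allows.
    """
    query = query.strip().rstrip(";").strip()
    if query.upper().startswith("SELECT"):
        return (f"SELECT * FROM ({query}) AS llm_subquery "
                f"LIMIT {max_rows + 1}")
    return query
```

Fetching `max_rows + 1` rows (rather than `max_rows`) lets the caller detect truncation: if the extra row comes back, the result was capped.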
🧹 Nitpick comments (6)
web/migrations/versions/add_tools_ai_perm_.py (1)
44-49: Normalize permission tokens before appending. If any existing `permissions` values include whitespace (e.g., `"tool_x, tool_y"`), line 45 will keep the spaces, so `'tools_ai'` might be appended even though a space-prefixed token exists, and you'll persist inconsistent formatting. Consider trimming and dropping empty tokens before the membership check, then re-joining.
♻️ Proposed normalization

```diff
-    perms_list = permissions.split(',')
-    if 'tools_ai' not in perms_list:
-        perms_list.append('tools_ai')
-        new_permissions = ','.join(perms_list)
+    perms_list = [p.strip() for p in permissions.split(',') if p.strip()]
+    if 'tools_ai' not in perms_list:
+        perms_list.append('tools_ai')
+        new_permissions = ','.join(perms_list)
```

web/pgadmin/llm/reports/queries.py (1)
945-950: Unused `context` parameter. The `context` parameter is declared but never used in the function body. If it's reserved for future scope filtering, consider documenting this or adding a TODO. Otherwise, remove it to avoid confusion.
Option 1: Remove if not needed

```diff
 def execute_query(
     conn,
     query_id: str,
-    context: dict,
     params: Optional[list] = None
 ) -> dict[str, Any]:
     """Execute a registered query and return results.

     Args:
         conn: Database connection.
         query_id: The query identifier.
-        context: Execution context (for scope filtering).
         params: Optional query parameters.
```

Option 2: Document as reserved for future use

```diff
 def execute_query(
     conn,
     query_id: str,
     context: dict,
     params: Optional[list] = None
 ) -> dict[str, Any]:
     """Execute a registered query and return results.

     Args:
         conn: Database connection.
         query_id: The query identifier.
-        context: Execution context (for scope filtering).
+        context: Execution context (reserved for future scope filtering).
         params: Optional query parameters.
```

web/pgadmin/llm/providers/openai.py (2)
125-131: Preserve exception chain for better debugging. The bare `except Exception` loses the original traceback. Use `from e` to chain the exception, which aids debugging by preserving the original cause.
♻️ Proposed fix

```diff
     try:
         response_data = self._make_request(payload)
         return self._parse_response(response_data)
     except LLMClientError:
         raise
     except Exception as e:
         raise LLMClientError(LLMError(
-            message=f"Request failed: {str(e)}",
+            message=f"Request failed: {e!r}",
             provider=self.provider_name
-        ))
+        )) from e
```
216-243: Add exception chaining to preserve traceback context. All three exception handlers (`HTTPError`, `URLError`, `socket.timeout`) should use `from` to preserve the original exception context for debugging.
♻️ Proposed fix

```diff
     except urllib.error.HTTPError as e:
         error_body = e.read().decode('utf-8')
         try:
             error_data = json.loads(error_body)
             error_msg = error_data.get('error', {}).get('message', str(e))
         except json.JSONDecodeError:
             error_msg = error_body or str(e)
         raise LLMClientError(LLMError(
             message=error_msg,
             code=str(e.code),
             provider=self.provider_name,
             retryable=e.code in (429, 500, 502, 503, 504)
-        ))
+        )) from e
     except urllib.error.URLError as e:
         raise LLMClientError(LLMError(
             message=f"Connection error: {e.reason}",
             provider=self.provider_name,
             retryable=True
-        ))
-    except socket.timeout:
+        )) from e
+    except socket.timeout as e:
         raise LLMClientError(LLMError(
             message="Request timed out. The request may be too large "
                     "or the server is slow to respond.",
             code='timeout',
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from e
```

web/pgadmin/llm/providers/docker.py (2)
128-137: Add exception chaining to preserve traceback context across all exception handlers. Exception chaining with `raise ... from e` preserves the original exception traceback (PEP 3134), improving debuggability when tracing root causes. This applies to lines 128-137 and 225-253:

- All `raise LLMClientError(...)` statements should use `from e`
- `except socket.timeout:` should bind the exception (`as e`) to enable chaining

Suggested fix (exception chaining)

```diff
-    except Exception as e:
-        raise LLMClientError(LLMError(
-            message=f"Request failed: {str(e)}",
-            provider=self.provider_name
-        ))
+    except Exception as e:
+        raise LLMClientError(LLMError(
+            message=f"Request failed: {e!s}",
+            provider=self.provider_name
+        )) from e
```

```diff
         raise LLMClientError(LLMError(
             message=error_msg,
             code=str(e.code),
             provider=self.provider_name,
             retryable=e.code in (429, 500, 502, 503, 504)
-        ))
+        )) from e
     except urllib.error.URLError as e:
         raise LLMClientError(LLMError(
             message=f"Connection error: {e.reason}. "
                     f"Is Docker Model Runner running at {self._api_url}?",
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from e
-    except socket.timeout:
-        raise LLMClientError(LLMError(
+    except socket.timeout as e:
+        raise LLMClientError(LLMError(
             message="Request timed out. Local models can be slow - "
                     "try a smaller model or wait for the response.",
             code='timeout',
             provider=self.provider_name,
             retryable=True
-        ))
+        )) from e
```
80-125: Pass through additional OpenAI-compatible API parameters instead of silently ignoring them. The `**kwargs` parameter is accepted but unused. Since Docker Model Runner provides an OpenAI-compatible API, it supports additional parameters (e.g., `top_p`, `frequency_penalty`, `presence_penalty`) that should be forwarded to the API. Silently ignoring them contradicts the documented intent ("Additional provider-specific parameters") and limits caller flexibility.
⚙️ Suggested fix

```diff
         payload = {
             'model': self._model,
             'messages': converted_messages,
             'max_completion_tokens': max_tokens,
             'temperature': temperature
         }
+        if kwargs:
+            payload.update(kwargs)
+
         if tools:
             payload['tools'] = self._convert_tools(tools)
             payload['tool_choice'] = 'auto'
```
This is a series of 4 commits that add:
Support is included for use with Anthropic and OpenAI in the cloud (for best results), or with Ollama or Docker Model Runner on local infrastructure (including the same machine), with models such as qwen3-coder or gemma.
Summary by CodeRabbit
New Features
Documentation