MCP server improvements: docs, error handling, create_model consolidation, edit_model redesign (#27)
Conversation
…handling for invalid datasources
Docs:
- MCP getting-started now leads with uvx (no install needed) and JSON config
- Replaced conversational DB setup with the recommended path: drop a YAML
datasource config with ${ENV_VAR} references into the storage folder
- Noted that datasource configs are hot-reloaded (no restart needed)
- Removed SSE details from getting-started (link to reference instead)
- MCP reference now has a Quick Start section with uvx examples
- Moved datasource_summary and ingest_datasource_models to Datasource
Management section in reference
Error handling:
- resolve_env_vars() now detects unresolved ${VAR} references and raises
a clear ValueError listing the missing variables
- yaml_storage.get_datasource() catches YAML parse errors and Pydantic
validation errors, re-raises as descriptive ValueErrors
- MCP tools (list_datasources, datasource_summary, describe_datasource,
list_tables) catch per-datasource errors so one bad config doesn't
break the whole list — errors are reported inline
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
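The commit above describes `resolve_env_vars()` only by its behavior. A minimal sketch of that behavior, assuming `${VAR}` syntax and plain `os.environ` lookup (the function name comes from the commit message; the body here is illustrative, not the project's actual implementation):

```python
import os
import re

# Matches ${VAR} references like those used in YAML datasource configs.
_VAR_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")


def resolve_env_vars(value: str) -> str:
    """Substitute ${VAR} references; raise a clear error if any variable is unset."""
    missing = [name for name in _VAR_PATTERN.findall(value) if name not in os.environ]
    if missing:
        raise ValueError(
            f"Unresolved environment variables: {', '.join(sorted(set(missing)))}"
        )
    return _VAR_PATTERN.sub(lambda m: os.environ[m.group(1)], value)
```

Listing every missing variable in one `ValueError` (rather than failing on the first) gives the user a single actionable message when several variables are unset.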
The create_model tool now accepts an optional `query` parameter (a SLayer query dict). When provided, it delegates to engine.create_model_from_query() and auto-introspects dimensions and measures from the query result. Mutually exclusive with sql_table/sql/dimensions/measures — clear error if both are supplied. This reduces the MCP tool surface from two tools to one without losing any functionality. The engine-level create_model_from_query() method is unchanged.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…verage Replace the limited add-only edit_model with a comprehensive version that supports creating, updating, and deleting all model entities (dimensions, measures, aggregations, joins, filters) in a single call. Key changes:
- Upsert semantics: dimensions/measures/aggregations/joins matched by name (or target_model for joins); existing entities get partial updates, new ones are created
- Structured remove dict keyed by entity type replaces ambiguous flat list
- Filter management via add_filters/remove_filters params
- New scalar params: sql_table, sql, hidden
- Full model re-validation before save catches cross-field errors
- 30 tests (rewrote 9, added 21) covering upsert, partial update, aggregations, joins, filters, typed remove, and validation
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
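The upsert-by-name semantics described in this commit can be sketched over plain dicts (the real tool operates on Pydantic models such as Dimension and Measure; the helper name and signature below are hypothetical):

```python
from typing import Any


def upsert_by_name(
    entities: list[dict[str, Any]],
    spec: dict[str, Any],
    *,
    id_field: str = "name",
) -> str:
    """Partially update the entity matching spec[id_field], or append a new one."""
    key = spec.get(id_field)
    if key is None:
        raise ValueError(f"spec is missing required field '{id_field}'")
    for entity in entities:
        if entity.get(id_field) == key:
            # Partial update: only the fields present in spec change.
            entity.update(spec)
            return f"updated {key}"
    # No match: create a new entity from the spec.
    entities.append(dict(spec))
    return f"created {key}"
```

Passing `id_field="target_model"` would cover the join case the commit mentions, where joins are matched by target model rather than by name.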
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough
Consolidates query-based model creation into create_model(query=...), removes create_model_from_query, refactors edit_model for typed upsert/remove and filter management, enforces datasource env-var resolution with errors, hardens YAML parsing, updates docs/examples, and expands tests to cover these behaviors.
Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant MCP as MCP Server
    participant Engine as Query Engine
    participant Storage as Model Storage
    participant YAML as YAML Loader/Config
    User->>MCP: create_model(query={...})
    MCP->>MCP: validate inputs (reject mixed params)
    MCP->>Engine: create_model_from_query(query)
    Engine->>Engine: execute query & infer schema
    Engine-->>MCP: inferred dimensions/measures
    MCP->>Storage: persist model definition
    Storage->>YAML: resolve datasource config (${VAR})
    YAML-->>Storage: resolved values or raise ValueError
    Storage-->>MCP: persisted confirmation
    MCP-->>User: return dimension/measure names or error
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
slayer/mcp/server.py (1)
300-345: ⚠️ Potential issue | 🟠 Major
Reject data_source in query mode instead of silently dropping it.
create_model(..., query=...) still accepts data_source, but this branch neither rejects it nor forwards it to engine.create_model_from_query(...). That makes part of the public API a no-op and can break query payloads that rely on an inline source_model and expect the outer data_source to be honored.
Suggested fix
```diff
     if query is not None:
         table_params = _build_dict(
-            sql_table=sql_table, sql=sql, dimensions=dimensions, measures=measures,
+            sql_table=sql_table,
+            sql=sql,
+            data_source=data_source,
+            dimensions=dimensions,
+            measures=measures,
         )
         if table_params:
             return (
                 f"Error: 'query' cannot be combined with {', '.join(table_params.keys())}. "
                 "Use 'query' alone to create from a query, or provide table details without 'query'."
             )
```
Based on learnings:
SlayerQuery.source_model accepts a model name, inline SlayerModel, or ModelExtension; create_model_from_query() saves a query as a permanent model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 300 - 345, The create_model function currently ignores data_source when query is provided; update create_model to explicitly reject a non-None data_source in the branch handling query: after entering if query is not None and before parsing the query (before calling SlayerQuery.model_validate or engine.create_model_from_query), check if data_source is not None and return or raise a clear error message (e.g., "Error: 'data_source' cannot be used when creating a model from 'query'") so callers are not silently misled; keep references to SlayerQuery.model_validate and engine.create_model_from_query and do not forward data_source to create_model_from_query.
🧹 Nitpick comments (1)
slayer/mcp/server.py (1)
517-533: Use keyword args for the _upsert_entity(...) calls.
These new call sites are hard to audit because every argument is positional, and they also miss the repo's multi-parameter call convention.
Suggested refactor
```diff
     for spec in dimensions or []:
-        err = _upsert_entity(model.dimensions, spec, Dimension, "name", changes, "dimension")
+        err = _upsert_entity(
+            entity_list=model.dimensions,
+            spec=spec,
+            entity_cls=Dimension,
+            id_field="name",
+            changes=changes,
+            label="dimension",
+        )
         if err:
             return err
     for spec in measures or []:
-        err = _upsert_entity(model.measures, spec, Measure, "name", changes, "measure")
+        err = _upsert_entity(
+            entity_list=model.measures,
+            spec=spec,
+            entity_cls=Measure,
+            id_field="name",
+            changes=changes,
+            label="measure",
+        )
         if err:
             return err
```
As per coding guidelines:
**/*.py: Use keyword arguments for functions with more than 1 parameter.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 517 - 533, The four calls to _upsert_entity use only positional args which violates the repo convention and makes reviews error-prone; update each call in the loops (the ones using model.dimensions/model.measures/model.aggregations/model.joins and classes Dimension/Measure/Aggregation/ModelJoin) to pass keyword arguments instead (e.g. collection=..., spec=..., entity_cls=... or entity_class=..., key_field=... or key=..., changes=..., kind=...), matching the _upsert_entity parameter names so each argument is explicit and follows the multi-parameter call convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/reference/mcp.md`:
- Line 76: The docs line for datasource_summary incorrectly promises JSON even
when no datasources/models exist; update either the tool or the doc: either
modify the datasource_summary implementation to always return a JSON empty
structure (e.g., a predictable empty object/array and consistent schema) or
change the documentation text for `datasource_summary` to state that it returns
JSON when datasources/models exist but returns a plain-text empty-state message
when none are present; reference `datasource_summary` in your change so
consumers know which behavior changed.
In `@slayer/mcp/server.py`:
- Around line 469-474: Ensure sql_table and sql remain mutually exclusive: in
the edit handling around model, first check if both sql_table and sql are
provided in the request and reject with a 400/error response (do not apply
changes). If only sql_table is provided, set model.sql_table = sql_table, clear
model.sql (set to None/""), and append both the "set sql_table ..." and a
"cleared sql" change entry; if only sql is provided, set model.sql = sql, clear
model.sql_table, and append both the "set sql ..." and a "cleared sql_table"
change entry. Use the existing variables model, sql_table, sql, and changes to
implement this logic.
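The set-one-and-clear-the-other logic requested in the prompt above can be sketched as a standalone helper. The dict-based model and the function name are illustrative assumptions; the real code mutates a Pydantic model:

```python
from typing import Optional


def apply_sql_source(
    model: dict,
    *,
    sql_table: Optional[str] = None,
    sql: Optional[str] = None,
) -> Optional[str]:
    """Set one SQL source and clear the other; reject setting both at once."""
    if sql_table is not None and sql is not None:
        return "Error: provide either 'sql_table' or 'sql', not both."
    changes = []
    if sql_table is not None:
        model["sql_table"], model["sql"] = sql_table, None
        changes += [f"set sql_table to {sql_table!r}", "cleared sql"]
    elif sql is not None:
        model["sql"], model["sql_table"] = sql, None
        changes += ["set sql", "cleared sql_table"]
    model.setdefault("_changes", []).extend(changes)
    return None  # no error
```

Recording both the "set" and the "cleared" entries in the change log makes the implicit side effect visible to the caller, which is the point of the reviewer's request.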
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: de280bf6-01fd-41d1-9102-0ee1bedf80c4
📒 Files selected for processing (12)
docs/concepts/models.md, docs/examples/02_sql_vs_dsl/sql_vs_dsl_nb.ipynb, docs/examples/07_aggregations/aggregations.md, docs/getting-started/mcp.md, docs/index.md, docs/interfaces/mcp.md, docs/reference/mcp.md, slayer/core/models.py, slayer/mcp/server.py, slayer/storage/yaml_storage.py, tests/test_mcp_server.py, tests/test_storage.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…MotleyAI/slayer into egor/dev-1239-mcp-server-tweaks
Actionable comments posted: 1
🧹 Nitpick comments (1)
slayer/mcp/server.py (1)
523-538: Switch the _upsert_entity() calls to keyword arguments.
These six-argument positional calls are hard to audit and violate the repo's Python calling convention.
♻️ Suggested cleanup
```diff
-    err = _upsert_entity(model.dimensions, spec, Dimension, "name", changes, "dimension")
+    err = _upsert_entity(
+        entity_list=model.dimensions,
+        spec=spec,
+        entity_cls=Dimension,
+        id_field="name",
+        changes=changes,
+        label="dimension",
+    )
     if err:
         return err
-    err = _upsert_entity(model.measures, spec, Measure, "name", changes, "measure")
+    err = _upsert_entity(
+        entity_list=model.measures,
+        spec=spec,
+        entity_cls=Measure,
+        id_field="name",
+        changes=changes,
+        label="measure",
+    )
     if err:
         return err
-    err = _upsert_entity(model.aggregations, spec, Aggregation, "name", changes, "aggregation")
+    err = _upsert_entity(
+        entity_list=model.aggregations,
+        spec=spec,
+        entity_cls=Aggregation,
+        id_field="name",
+        changes=changes,
+        label="aggregation",
+    )
     if err:
         return err
-    err = _upsert_entity(model.joins, spec, ModelJoin, "target_model", changes, "join")
+    err = _upsert_entity(
+        entity_list=model.joins,
+        spec=spec,
+        entity_cls=ModelJoin,
+        id_field="target_model",
+        changes=changes,
+        label="join",
+    )
     if err:
         return err
```
As per coding guidelines "Use keyword arguments for functions with more than 1 parameter."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 523 - 538, The calls to _upsert_entity in server.py use six positional arguments which violates the repo convention; change each call to use keyword arguments (e.g., model=<...>, spec=<...>, entity_class=<...>, key_field=<...>, changes=<...>, kind=<...>) so it's obvious what each value represents—update the four loops calling _upsert_entity (referencing model.dimensions/model.measures/model.aggregations/model.joins and the corresponding Dimension/Measure/Aggregation/ModelJoin, the "name"/"target_model" key_field, changes, and the "dimension"/"measure"/"aggregation"/"join" kind) to pass those values as keywords.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@slayer/mcp/server.py`:
- Around line 214-222: The exception handler around storage.get_datasource
currently returns str(exc) to MCP clients, which can leak sensitive input;
instead, catch exceptions from storage.get_datasource (e.g., in the try/except
blocks that call storage.get_datasource and then datasources.append) and replace
the returned error text with a sanitized category like "invalid datasource" or
"validation error"; log the full exception and stack trace server-side (using
the existing logger) for debugging, and apply the same change to the other
similar handlers that call storage.get_datasource (the blocks around lines
663-668, 678-681, 719-722) so clients only receive the safe message while full
details remain in server logs.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
slayer/mcp/server.py (1)
232-239: ⚠️ Potential issue | 🟡 Minor
Keep datasource_summary()'s JSON envelope stable.
The new empty-workspace branch returns "datasources": [], but a model-only workspace still omits that key because the top-level dict is built conditionally. Clients parsing this as structured JSON now have to handle two response shapes for the same tool.
📦 Suggested fix
```diff
-    result = {}
-    if datasources:
-        result["datasources"] = datasources
-    result["models"] = models
-    result["model_count"] = len(models)
+    result = {
+        "datasources": datasources,
+        "models": models,
+        "model_count": len(models),
+    }
```
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 232 - 239, The JSON envelope returned by datasource_summary() is unstable because "datasources" is only set when datasources is truthy; always include the "datasources" key (even as an empty list) so clients see a consistent shape. Modify the construction of result (the dict built from datasources, models, and model_count) to always set result["datasources"] = datasources (or [] when None/falsey), and keep result["models"] = models and result["model_count"] = len(models) as before so the response shape is stable for all workspaces.
♻️ Duplicate comments (1)
slayer/mcp/server.py (1)
214-222: ⚠️ Potential issue | 🟠 Major
Stop returning raw datasource load errors to MCP clients.
slayer/storage/yaml_storage.py:55-71 already wraps bad datasource files with the datasource path and low-level parser/validation detail. Echoing str(exc) here leaks that internal config/debug data through datasource_summary, list_datasources, describe_datasource, and list_tables. Log the full exception server-side and return a fixed client-safe message instead.
🛡️ Suggested sanitization pattern

```diff
-        except (ValueError, Exception) as exc:
-            datasources.append({"name": name, "error": str(exc)})
+        except Exception as exc:
+            logger.warning("Failed to load datasource %s", name, exc_info=exc)
+            datasources.append({"name": name, "error": "invalid datasource config"})
```

```diff
-    except (ValueError, Exception) as exc:
-        return f"Datasource '{name}' has an invalid config: {exc}"
+    except Exception as exc:
+        logger.warning("Failed to load datasource %s", name, exc_info=exc)
+        return f"Datasource '{name}' has an invalid config."
```

Also applies to: 663-668, 678-681, 719-722
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 214 - 222, The current try/except around storage.get_datasource in server.py (used by the datasource summary/listing endpoints such as datasource_summary, list_datasources, describe_datasource, and list_tables) is returning str(exc) to clients which leaks internal parser/validation details; instead, log the full exception server-side (including stack trace) and append a fixed, client-safe error entry like {"name": name, "error": "failed to load datasource"} (or similar) to datasources; update the identical patterns in the other handlers that call storage.get_datasource so they all log the exception and return the safe message rather than the raw exception string.
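The log-and-sanitize pattern recommended in this comment generalizes to a small helper. The names below (safe_load_datasource, the storage protocol) are hypothetical; the point is that raw exception text stays in server logs while clients receive a fixed message:

```python
import logging

logger = logging.getLogger("slayer.mcp")


def safe_load_datasource(storage, name: str):
    """Return (datasource, error). Full exception detail goes to server logs only."""
    try:
        return storage.get_datasource(name), None
    except Exception:
        # Stack trace and raw message stay server-side; the client sees a fixed string.
        logger.warning("Failed to load datasource %r", name, exc_info=True)
        return None, f"Datasource '{name}' has an invalid config."
```

A tool handler would then report `error` inline (or return it) without ever echoing `str(exc)`, which may contain file paths or unresolved `${VAR}` names.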
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@slayer/mcp/server.py`:
- Around line 332-340: The check for mixed query/table parameters wrongly treats
empty placeholders (e.g., sql="", dimensions=[], measures=[]) as present; after
creating table_params (via _build_dict) filter out empty/falsy values (empty
strings, empty lists/dicts) before deciding there is a conflict. Update the
logic around table_params in server.py so that when query is not None you ignore
keys whose values are empty (use a dict comprehension or extend _build_dict to
drop empty containers/strings) and only raise the error if any remaining keys
exist; reference variables/functions: query, table_params, and _build_dict.
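Dropping empty placeholders before the conflict check, as the prompt above suggests, can be as small as one comprehension (the function name is hypothetical; the real fix would extend _build_dict):

```python
from typing import Any


def build_non_empty(**kwargs: Any) -> dict[str, Any]:
    """Keep only parameters that were meaningfully supplied.

    None, "", [], and {} are all treated as 'not provided', so an empty
    placeholder never triggers the query/table mutual-exclusion error.
    """
    return {k: v for k, v in kwargs.items() if v not in (None, "", [], {})}
```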
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6869cb3b-494b-4e10-b180-7be8988692ca
📒 Files selected for processing (2)
slayer/mcp/server.py, tests/test_mcp_server.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
slayer/mcp/server.py (2)
833-835: ⚠️ Potential issue | 🟡 Minor
Wrap storage.get_datasource() in try/except for consistency.
Same issue as edit_datasource: if the datasource config is malformed, the exception will propagate and may expose error details.
🛡️ Suggested fix

```diff
 @mcp.tool()
 def ingest_datasource_models(datasource_name: str, include_tables: str = "", schema_name: str = "") -> str:
     ...
     from slayer.engine.ingestion import ingest_datasource as _ingest
-    ds = storage.get_datasource(datasource_name)
+    try:
+        ds = storage.get_datasource(datasource_name)
+    except Exception as exc:
+        logger.warning("Failed to load datasource '%s': %s", datasource_name, exc)
+        return f"Datasource '{datasource_name}' has an invalid config."
     if ds is None:
         return f"Datasource '{datasource_name}' not found."
```
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 833 - 835, Wrap the call to storage.get_datasource(datasource_name) in a try/except like edit_datasource does: catch exceptions raised while loading/parsing the datasource config, log the internal exception details via the module logger (or process logger) and return a sanitized message such as "Datasource '<name>' not found or invalid" to the caller; ensure you still return the existing "Datasource '<datasource_name>' not found." when ds is None, but prevent uncaught exceptions from propagating by handling errors from storage.get_datasource.
782-789: ⚠️ Potential issue | 🟡 Minor
Wrap storage.get_datasource() in try/except for consistency.
Unlike list_datasources, describe_datasource, and list_tables, this handler doesn't catch exceptions from storage.get_datasource(). If the datasource config is malformed (invalid YAML, unresolved env vars), the exception will propagate up and potentially expose raw error details.
🛡️ Suggested fix

```diff
 @mcp.tool()
 def edit_datasource(
     name: str,
     description: Optional[str] = None,
 ) -> str:
     """Update a datasource's metadata.

     Args:
         name: Datasource name to update.
         description: New description for the datasource.
     """
-    ds = storage.get_datasource(name)
+    try:
+        ds = storage.get_datasource(name)
+    except Exception as exc:
+        logger.warning("Failed to load datasource '%s': %s", name, exc)
+        return f"Datasource '{name}' has an invalid config."
     if ds is None:
         return f"Datasource '{name}' not found."
```
Verify each finding against the current code and only fix it if needed. In `@slayer/mcp/server.py` around lines 782 - 789, Wrap the call to storage.get_datasource(name) in a try/except like the other handlers (list_datasources, describe_datasource, list_tables) so malformed datasource configs don't raise raw exceptions; catch Exception around storage.get_datasource(name), log the exception internally (not returned to caller) and return a safe error message (e.g., "Datasource '<name>' not found or invalid") if an error occurs, then proceed to the existing ds is None check and the subsequent description update and storage.save_datasource(ds).
🧹 Nitpick comments (2)
tests/test_mcp_server.py (2)
79-80: Remove redundant import.
DatasourceConfig is already imported at the top of the file (line 14). This local import is unnecessary.
♻️ Suggested fix

```diff
     def test_includes_datasource(self, mcp_server, storage: YAMLStorage) -> None:
-        from slayer.core.models import DatasourceConfig
         storage.save_datasource(DatasourceConfig(
             name="mydb",
             type="postgres",
             host="localhost",
             description="Production DB",
         ))
```
Verify each finding against the current code and only fix it if needed. In `@tests/test_mcp_server.py` around lines 79 - 80, In test_includes_datasource within tests/test_mcp_server.py remove the redundant local import statement "from slayer.core.models import DatasourceConfig" (DatasourceConfig is already imported at the top of the file); simply delete that line so the test_includes_datasource function uses the existing top-level DatasourceConfig import.
186-187: Consider clarifying the assertion logic.
The assertion "Error:" not in result or "Datasource" in result is intended to verify routing to the query path (per the comment), but the logic is subtle: "Error:" (colon immediately after "Error") won't match "Error creating model from query:" because the colon appears later in that string. A more explicit check would improve readability.
♻️ Suggested clarification

```diff
     # Should route to query path (fails on missing datasource), not mixed-param error
-    assert "Error:" not in result or "Datasource" in result
+    # Mixed-param error: "Error: 'query' cannot..." vs query-path error: "Error creating model..."
+    assert "cannot be combined" not in result  # Ensure we didn't hit mixed-param check
```
Verify each finding against the current code and only fix it if needed. In `@tests/test_mcp_server.py` around lines 186 - 187, The existing assertion uses the subtle expression `"Error:" not in result or "Datasource" in result`; update it to an explicit boolean check to make intent clear — for example replace with `assert not ("Error" in result and "Datasource" not in result)` (or, if you want to match the specific error text, use `assert not ("Error creating model from query:" in result and "Datasource" not in result")`). Change the assertion around the `result` variable in the test so it clearly asserts that either no "Error" appears or the string "Datasource" is present.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c606b523-33cd-4ad0-bf66-8b23e5540045
📒 Files selected for processing (2)
slayer/mcp/server.py, tests/test_mcp_server.py
Summary
A batch of MCP server improvements spanning documentation, robustness, and tool API design:
Docs: uvx install-free setup & YAML datasource config — MCP getting-started now leads with uvx (no install needed) and JSON config. Replaced conversational DB setup with the recommended YAML datasource config approach using ${ENV_VAR} references. Trimmed SSE details from getting-started (linked to reference instead). Added Quick Start section to MCP reference.

Graceful error handling for invalid datasources — resolve_env_vars() detects unresolved ${VAR} references and raises a clear ValueError. yaml_storage.get_datasource() catches parse/validation errors. MCP tools (list_datasources, datasource_summary, etc.) catch per-datasource errors so one bad config doesn't break the whole listing.

Merge create_model_from_query into create_model — The create_model tool now accepts an optional query parameter. When provided, it delegates to engine.create_model_from_query() and auto-introspects dimensions/measures from the query result. Mutually exclusive with sql_table/sql/dimensions/measures. Reduces tool surface without losing functionality.

Redesign edit_model with upsert semantics and full entity coverage — Replace the limited add-only edit_model with a comprehensive version supporting all model entities (dimensions, measures, aggregations, joins, filters). Named entities use upsert-by-name: if an entity exists, only provided fields are updated; if not, it's created. Structured remove dict keyed by entity type replaces the old ambiguous flat list. 30 tests (rewrote 9, added 21).

Test plan
- Tests pass (poetry run pytest) — 467 tests
- Lint passes (poetry run ruff check slayer/ tests/)
- edit_model upsert: create a model, then call edit_model with an existing dimension name — should update, not error
- edit_model structured remove: remove={"measures": ["x"]} deletes the named measure
- create_model with query param works end-to-end against a live datasource
- datasource_summary

🤖 Generated with Claude Code