48 changes: 27 additions & 21 deletions docs/configuration/datasources.md

SLayer uses [sqlglot](https://github.com/tobymao/sqlglot) for dialect-aware SQL generation. Databases are supported at two tiers:

### Database Drivers

#### First-class support

These databases are verified by integration tests and runnable Docker examples. Regressions are caught in CI.

| Type | Install Extra | Connection String |
|------|---------------|-------------------|
| `sqlite` | (built-in, no extra needed) | `sqlite:///path/to/db.sqlite` |
| `postgres` / `postgresql` | `motley-slayer[postgres]` | `postgresql://user:pass@localhost:5432/db` |
| `mysql` / `mariadb` | `motley-slayer[mysql]` | `mysql+pymysql://user:pass@localhost:3306/db` |
| `clickhouse` | `motley-slayer[clickhouse]` | `clickhouse+http://user:pass@localhost:8123/db` |
| `duckdb` | `motley-slayer[duckdb]` | `duckdb:///path/to/db.duckdb` |

#### Additional support

SQL generation is covered by unit tests, but not verified against live instances. Install the appropriate SQLAlchemy driver manually.

| Type | SQLAlchemy Driver | Install |
|------|-------------------|---------|
| `snowflake` | `snowflake-sqlalchemy` | `pip install snowflake-sqlalchemy` |
| `bigquery` | `sqlalchemy-bigquery` | `pip install sqlalchemy-bigquery` |
| `redshift` | `sqlalchemy-redshift` + `redshift_connector` | `pip install sqlalchemy-redshift redshift-connector` |
| `trino` / `presto` / `athena` | `trino` or `PyAthena` | `pip install trino` or `pip install PyAthena` |
| `databricks` / `spark` | `databricks-sql-connector` | `pip install databricks-sql-connector` |
| `oracle` | `oracledb` | `pip install oracledb` |
| `mssql` / `sqlserver` / `tsql` | `pyodbc` or `pymssql` | `pip install pyodbc` or `pip install pymssql` |

!!! note
    Snowflake, BigQuery, ClickHouse, and similar analytical warehouses typically don't have foreign keys, so auto-ingestion won't discover joins. Define joins manually in your model YAML.

!!! tip
    If your database isn't listed but is supported by sqlglot, it may already work — SLayer falls back to Postgres-style SQL by default. Try it and [open an issue](https://github.com/MotleyAI/slayer/issues) if you hit a problem.
33 changes: 33 additions & 0 deletions docs/configuration/storage.md
```python
from slayer.storage.sqlite_storage import SQLiteStorage

storage = SQLiteStorage(db_path="./slayer.db")
```

## Storage Resolution

The `resolve_storage()` factory creates a backend from a path or URI:

```python
from slayer.storage.base import resolve_storage

storage = resolve_storage("./slayer_data") # YAMLStorage (directory)
storage = resolve_storage("slayer.db") # SQLiteStorage (.db extension)
storage = resolve_storage("sqlite:///slayer.db") # SQLiteStorage (explicit scheme)
storage = resolve_storage("yaml://./data") # YAMLStorage (explicit scheme)
```

The CLI uses this via the `--storage` flag:

```bash
slayer serve --storage ./slayer_data # YAML
slayer serve --storage slayer.db # SQLite
```

## Custom Backends

Both backends implement the `StorageBackend` protocol. You can write your own:
```python
class MyCustomStorage(StorageBackend):
    # ...other StorageBackend protocol methods...
    def delete_datasource(self, name: str) -> bool: ...
```

Register it for URI-based resolution:

```python
from slayer.storage.base import register_storage, resolve_storage
from my_package import RedisStorage

register_storage("redis", lambda path: RedisStorage(url=f"redis://{path}"))

# Now works everywhere:
storage = resolve_storage("redis://localhost:6379/0")
# slayer serve --storage redis://localhost:6379/0
```

Any backend implementing the protocol can be passed to the server, MCP, or client.
128 changes: 128 additions & 0 deletions docs/getting-started/cli.md
# CLI Setup — Terminal Users

Query your database from the command line. No Python code needed — just install and go.

## Install

```bash
uv tool install motley-slayer
```

For databases other than SQLite, add the driver extra (see [full list](../configuration/datasources.md#database-drivers)):

```bash
uv tool install 'motley-slayer[postgres]'
```

## Connect a database

Create a datasource — either from a YAML file or inline:

```bash
# Inline (quick setup — use ${ENV_VAR} for secrets)
slayer datasources create-inline my_pg \
--type postgres \
--host localhost \
--database myapp \
--username analyst \
--password-stdin

# Or from a YAML file
slayer datasources create datasource.yaml
```

YAML file format:

```yaml
# datasource.yaml
name: my_pg
type: postgres
host: localhost
port: 5432
database: myapp
username: analyst
password: ${DB_PASSWORD}
```

Test the connection:

```bash
slayer datasources test my_pg
# OK — connected to 'my_pg' (postgres).
```

## Ingest models

Auto-generate models from your database schema:

```bash
slayer ingest --datasource my_pg
# Ingested: orders (6 dims, 12 measures)
# Ingested: customers (4 dims, 5 measures)
# Ingested: regions (3 dims, 2 measures)
```

Optionally filter tables:

```bash
slayer ingest --datasource my_pg --schema public --include orders,customers
slayer ingest --datasource my_pg --exclude migrations,django_session
```

## Query

```bash
# Count orders by status
slayer query '{"source_model": "orders", "fields": [{"formula": "count"}], "dimensions": ["status"]}'

# From a file
slayer query @query.json

# Output as JSON (pipe-friendly)
slayer query @query.json --format json

# Preview the generated SQL without running it
slayer query @query.json --dry-run

# Show execution plan
slayer query @query.json --explain
```
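For the file-based form, `query.json` contains the same JSON you would pass inline. For example, the first query above as a file:

```json
{
  "source_model": "orders",
  "fields": [{"formula": "count"}],
  "dimensions": ["status"]
}
```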

## Explore models

```bash
slayer models list
slayer models show orders
slayer datasources list
```

## Verify it works

After install + ingest, this should return data:

```bash
slayer query '{"source_model": "orders", "fields": [{"formula": "count"}]}'
```

Expected output:

```
orders.count
------------
42

1 row(s)
```

If you see "Model 'orders' not found", check that `slayer ingest` ran successfully and that `--storage` points to the right location.

## Start a server (optional)

If you also want a REST API or MCP endpoint:

```bash
slayer serve # REST API at http://localhost:5143
slayer serve --storage slayer.db # Using SQLite storage
```

See the [CLI Reference](../reference/cli.md) for all commands and flags.
37 changes: 37 additions & 0 deletions docs/getting-started/index.md
# Getting Started

SLayer is a semantic layer that sits between your database and whatever consumes the data — AI agents, apps, scripts, or dashboards. You define your data model once (or let SLayer auto-generate it), and consumers query using measures, dimensions, and filters instead of writing SQL.

## Which interface is right for you?

| I want to... | Use | Guide |
|---|---|---|
| Connect an AI agent (Claude, Cursor) to my database | **MCP Server** | [MCP Setup](mcp.md) |
| Query from the terminal or scripts | **CLI** | [CLI Setup](cli.md) |
| Build an app that queries data (any language) | **REST API** | [REST API Setup](rest-api.md) |
| Use SLayer as a Python library | **Python SDK** | [Python Setup](python.md) |

All four interfaces use the same query language and the same models — pick the one that fits your workflow. You can use multiple interfaces simultaneously (e.g., MCP for your agent + REST API for your dashboard).

## Supported Databases

SLayer works with most SQL databases. The base install includes SQLite support (no extras needed).

| Database | Install | Status |
|---|---|---|
| SQLite | included | Fully tested |
| PostgreSQL | `motley-slayer[postgres]` | Fully tested |
| MySQL / MariaDB | `motley-slayer[mysql]` | Fully tested |
| ClickHouse | `motley-slayer[clickhouse]` | Fully tested |
| DuckDB | `motley-slayer[duckdb]` | Fully tested |
| Snowflake, BigQuery, Redshift, Trino, Databricks, MS SQL, Oracle | Covered by sqlglot | SQL generation tested |

## Next Steps

After setting up your interface, explore:

- [Terminology](../concepts/terminology.md) — key terms and concepts
- [Models](../concepts/models.md) — define custom dimensions and measures
- [Queries](../concepts/queries.md) — query structure and parameters
- [Formulas](../concepts/formulas.md) — transforms, arithmetic, filters
- [Examples](../examples/01_dynamic/dynamic.md) — interactive notebooks
75 changes: 75 additions & 0 deletions docs/getting-started/mcp.md
# MCP Setup — AI Agents

Connect your AI agent (Claude Code, Cursor, etc.) to your database through SLayer's MCP server. No Python knowledge required.

## Install

```bash
uv tool install motley-slayer
```

For databases other than SQLite, add the driver extra (see [full list](../configuration/datasources.md#database-drivers)):

```bash
uv tool install 'motley-slayer[postgres]'
```

## Connect to your agent

### Claude Code (stdio — recommended)

```bash
claude mcp add slayer -- slayer mcp --storage ./slayer_data
```

If SLayer is in a virtualenv, use the full path to the executable:

```bash
claude mcp add slayer -- $(which slayer) mcp --storage /absolute/path/to/slayer_data
```

### Remote agents (HTTP/SSE)

Start the server, then point your agent at the SSE endpoint:

```bash
slayer serve --storage ./slayer_data

# In another terminal / agent config:
claude mcp add slayer-remote --transport sse http://localhost:5143/mcp/sse
```

## Connect a database

Once the agent is connected, it handles everything conversationally. A typical exchange:

> **You:** Connect to my Postgres database at localhost, database "myapp", user "analyst"
>
> **Agent:** *calls `create_datasource` → auto-ingests models → calls `datasource_summary`*
>
> "Connected! I found 4 tables: orders (12 dims, 8 measures), customers (5 dims, 3 measures), ..."
>
> **You:** How many orders per status?
>
> **Agent:** *calls `query(source_model="orders", fields=[{"formula": "count"}], dimensions=["status"])`*

The agent uses these MCP tools in order:

1. `create_datasource` — connect to DB (auto-ingests models by default)
2. `datasource_summary` — discover available models and their schemas
3. `inspect_model` — see dimensions, measures, and sample data for a model
4. `query` — run queries

See the [MCP Reference](../reference/mcp.md) for the full tools list.

## Verify it works

Ask your agent:

> "List the available SLayer models"

The agent should call `datasource_summary` and return a list of your tables/models. If it says "no models found", check that:

1. The `--storage` path is correct
2. You've connected a datasource (or the agent has via `create_datasource`)
3. Models were ingested (auto-ingest runs by default with `create_datasource`)