# Lakebase plugin

:::info
This setup requires a one-time manual process to connect your Databricks App's service principal to your Lakebase database. You'll need the [Databricks CLI](https://docs.databricks.com/dev-tools/cli/install.html), [`jq`](https://jqlang.github.io/jq/), and [`psql`](https://www.postgresql.org/download/) installed locally.
An automated setup is coming soon.
:::
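The three tools named above can be checked up front. A small sketch (it only probes `PATH`; nothing Databricks-specific is assumed):

```sh
# Confirm the prerequisites are installed before starting
for tool in databricks jq psql; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool" >&2
done
```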

Provides a PostgreSQL connection pool for Databricks Lakebase Autoscaling with automatic OAuth token refresh.

## Setting up Lakebase

Before using the plugin, you need to connect your Databricks App's service principal to your Lakebase database. The script below walks through the entire setup — fill in the variables at the top and run each section.

> **Note:** The Databricks CLI commands below use your **DEFAULT** profile. To use a different profile, set `export DATABRICKS_CONFIG_PROFILE=<profile-name>` before running the script.
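For example (the profile name here is hypothetical; `databricks auth profiles` lists the ones configured on your machine):

```sh
# Hypothetical profile name; run `databricks auth profiles` to list yours
export DATABRICKS_CONFIG_PROFILE=my-workspace
echo "Using profile: ${DATABRICKS_CONFIG_PROFILE}"
```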

Some values come from the Databricks UI:
- **Project ID** and **Branch ID** — from the URL when viewing your Lakebase branch: `.../projects/{project-id}/branches/{branch-id}/...`
- **PGHOST** — from the **Connect** dialog on your Lakebase branch
- **App name** — your Databricks App name (from `Compute > Apps`)
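The IDs can also be cut out of the branch URL with `sed`. A sketch using a made-up URL (the host and IDs are placeholders):

```sh
# Made-up branch URL; paste your own from the browser address bar
BRANCH_URL="https://example.cloud.databricks.com/projects/proj-123/branches/br-456/overview"

# Extract the path segment after /projects/ and after /branches/
PROJECT_ID=$(printf '%s\n' "$BRANCH_URL" | sed -n 's#.*/projects/\([^/]*\)/branches/.*#\1#p')
BRANCH_ID=$(printf '%s\n' "$BRANCH_URL" | sed -n 's#.*/branches/\([^/]*\).*#\1#p')

echo "PROJECT_ID=$PROJECT_ID BRANCH_ID=$BRANCH_ID"   # → PROJECT_ID=proj-123 BRANCH_ID=br-456
```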

```sh
# ──────────────────────────────────────────────────
# 1. Set your variables
# ──────────────────────────────────────────────────
# Navigate to the Lakebase Branch Overview page (Projects -> Project dashboard -> Branch overview)
# Copy the Project ID and Branch ID from the branch URL: /projects/{id}/branches/{id}
PROJECT_ID=<your-project-id>
BRANCH_ID=<your-branch-id>

# Click "Connect" in the Lakebase Branch Overview page to get the details.
# Use the "Parameters only" option to get the values.
PGHOST=<your-lakebase-host>
PGDATABASE=databricks_postgres

# Your Databricks App name
APP_NAME=<your-app-name>

# ──────────────────────────────────────────────────
# 2. Look up the endpoint via CLI
# ──────────────────────────────────────────────────
# Uses the first endpoint; branches typically have one
LAKEBASE_ENDPOINT=$(databricks postgres list-endpoints "projects/${PROJECT_ID}/branches/${BRANCH_ID}" | jq -r '.[0].name')
echo "Endpoint: ${LAKEBASE_ENDPOINT}"

# ──────────────────────────────────────────────────
# 3. Get your app's service principal
# ──────────────────────────────────────────────────
SP_CLIENT_ID=$(databricks apps get "${APP_NAME}" | jq -r '.service_principal_client_id')
echo "Service principal: ${SP_CLIENT_ID}"

# ──────────────────────────────────────────────────
# 4. Grant access to the service principal via psql
# ──────────────────────────────────────────────────
export PGSSLMODE=require
export PGPASSWORD=$(databricks postgres generate-database-credential "${LAKEBASE_ENDPOINT}" | jq -r '.token')

# Note: the heredoc below is unquoted so ${SP_CLIENT_ID} expands;
# the \$\$ markers are escaped so the shell passes literal $$ through to psql.
psql -h "${PGHOST}" -d "${PGDATABASE}" -U "$(databricks current-user me | jq -r '.userName')" <<SQL
CREATE EXTENSION IF NOT EXISTS databricks_auth;

DO \$\$
DECLARE
  sp TEXT := '${SP_CLIENT_ID}';
BEGIN
  -- Create service principal role (safe to re-run)
  PERFORM databricks_create_role(sp, 'SERVICE_PRINCIPAL');

  -- Connection and schema access
  EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON FUNCTIONS TO %I', sp);
  EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON ROUTINES TO %I', sp);
END \$\$;
SQL
# ──────────────────────────────────────────────────
# 5. Verify the role was created
# ──────────────────────────────────────────────────
psql -h "${PGHOST}" -d "${PGDATABASE}" -U "$(databricks current-user me | jq -r '.userName')" \
-c "SELECT rolname FROM pg_roles WHERE rolname = '${SP_CLIENT_ID}'"
```
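Step 2 of the script takes the first endpoint unconditionally. If a branch ever has more than one, you can select by name instead. A sketch against hand-written example output (the exact JSON shape returned by `databricks postgres list-endpoints` is an assumption here):

```sh
# Hand-written stand-in for the CLI's endpoint listing
ENDPOINTS_JSON='[
  {"name": "projects/proj-123/branches/br-456/endpoints/primary"},
  {"name": "projects/proj-123/branches/br-456/endpoints/replica"}
]'

# Pick the endpoint whose name ends in /primary instead of blindly using .[0]
LAKEBASE_ENDPOINT=$(printf '%s' "$ENDPOINTS_JSON" | jq -r '.[] | select(.name | endswith("/primary")) | .name')
echo "$LAKEBASE_ENDPOINT"
```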

## Basic usage
