Schema migrations for AWS Aurora DSQL — because Atlas doesn't support it yet.
aurora is a CLI tool that brings schema-as-code migrations to AWS Aurora DSQL. It uses the same HCL config format as Atlas, handles distributed locking, generates IAM auth tokens automatically, and translates PostgreSQL index syntax to Aurora DSQL-compatible equivalents — so your local dev environment and production stay in sync.
- Atlas-compatible HCL config — familiar `env`, `variable`, and `data` blocks
- IAM token generation — `data.aws_dsql_token` fetches a signed auth token at runtime
- Distributed locking — prevents concurrent migrations across multiple replicas or CI runs
- Index syntax translation — rewrites `CONCURRENTLY` to `ASYNC` automatically for Aurora DSQL
- Idempotent-first design — each migration is safe to re-run; no drift surprises
- Multi-platform Docker image — `linux/amd64` and `linux/arm64` published to GHCR
### Docker (recommended)

```dockerfile
FROM ghcr.io/aws-contrib/aws-aurora:edge AS aurora

FROM scratch
WORKDIR /app
COPY --from=aurora /bin/aurora /app/aurora
```

### From source

```shell
go install github.com/aws-contrib/aws-aurora/cmd/aurora@latest
```

Create `aurora.hcl` in your project root:
```hcl
env "aws" {
  migration {
    dir = "file://database/migration"
  }

  url = "postgres://${var.aws_dsql_username}:${urlescape(data.aws_dsql_token.this)}@${var.aws_dsql_host}/mydb"
}

data "aws_dsql_token" "this" {
  username = var.aws_dsql_username
  endpoint = var.aws_dsql_host
  region   = var.aws_region
}

variable "aws_dsql_username" {
  type    = string
  default = "my-service"
}

variable "aws_dsql_host" {
  type    = string
  default = getenv("DATABASE_HOST")
}

variable "aws_region" {
  type    = string
  default = getenv("AWS_REGION")
}
```

Use Atlas to generate migration files, then place them in your migrations directory (e.g. `database/migration/`).
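The `url` attribute passes the token through `urlescape()` because IAM auth tokens contain characters that are not valid in a connection-string password. A small Go sketch of the equivalent assembly, using the standard `net/url` package (the function and its arguments are illustrative, not part of the CLI):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN mirrors what the url expression in aurora.hcl evaluates to:
// the auth token may contain '/', '+', '=' and similar characters, so it
// is percent-encoded before being embedded as the password.
func buildDSN(username, token, host, dbname string) string {
	return fmt.Sprintf("postgres://%s:%s@%s/%s",
		url.QueryEscape(username),
		url.QueryEscape(token), // the equivalent of urlescape(data.aws_dsql_token.this)
		host, dbname)
}

func main() {
	fmt.Println(buildDSN("my-service", "abc/def+g==", "example.dsql.us-east-1.on.aws", "mydb"))
}
```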
Migrations must be idempotent — Aurora DSQL does not allow a single transaction to contain multiple DDL statements or to mix DDL with DML, so the CLI executes each statement individually. Write migrations so they can be safely re-applied:
```sql
-- Good: idempotent
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY,
    name TEXT NOT NULL
);

ALTER TABLE users ADD COLUMN IF NOT EXISTS email TEXT;
```

Index creation — use `CONCURRENTLY` locally (it runs outside a transaction in PostgreSQL). The CLI translates it to `ASYNC` when running against Aurora DSQL:
```sql
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);

-- Becomes on Aurora DSQL:
-- CREATE INDEX ASYNC IF NOT EXISTS idx_users_email ON users (email);
```

All SQL statements must end with `;` — the CLI splits migration files on semicolons.
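Conceptually, this means each file is split into individual statements and index DDL is rewritten before execution. A minimal Go sketch of both steps, assuming a naive splitter (the real CLI likely handles quoted strings and comments more carefully; these helpers are illustrative, not the tool's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitStatements splits a migration file on semicolons and discards
// empty fragments. A naive sketch: it ignores semicolons inside string
// literals and comments.
func splitStatements(sql string) []string {
	var out []string
	for _, s := range strings.Split(sql, ";") {
		if s = strings.TrimSpace(s); s != "" {
			out = append(out, s)
		}
	}
	return out
}

// translateIndex rewrites PostgreSQL's CONCURRENTLY keyword to Aurora
// DSQL's ASYNC, mirroring the translation described above.
func translateIndex(stmt string) string {
	return strings.Replace(stmt, "CREATE INDEX CONCURRENTLY", "CREATE INDEX ASYNC", 1)
}

func main() {
	file := `CREATE TABLE IF NOT EXISTS users (id UUID PRIMARY KEY);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);`
	for _, stmt := range splitStatements(file) {
		fmt.Println(translateIndex(stmt))
	}
}
```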
```shell
# Apply all pending migrations
aurora migrate --env aws apply

# Show current status
aurora migrate --env aws status

# Block until all pending migrations are applied (useful in CI/CD init containers)
aurora migrate --env aws status --wait --wait-timeout 10m
```

```
aurora migrate [flags] <command>

Flags:
  --config  Path to config file (default: file://aurora.hcl)
  --env     Environment name from config (required)

Commands:
  apply   Apply all pending migrations
  status  Show migration status

apply flags:
  --lock-timeout  How long to wait for the distributed lock (default: 25m)

status flags:
  --wait          Block until no pending migrations remain
  --wait-timeout  Maximum time to wait (default: 25m)
```
Dependencies are managed via Nix flakes. You can work locally with `nix develop` or inside a Dev Container — both use the same toolchain.

Requires Nix with flakes enabled (`experimental-features = nix-command flakes`).

```shell
nix develop
```

This drops you into a shell with Go, PostgreSQL client tools, and the AWS CLI available. You'll need a local PostgreSQL instance — see `$AURORA_DATABASE_URL` for the expected connection string.
This project ships with a Dev Container powered by the Nix devcontainer feature. It installs the same Nix environment and spins up a PostgreSQL service automatically.
```shell
# Open in VS Code with the Dev Containers extension installed
code .
# Then: Reopen in Container
```

Once inside the container, a local PostgreSQL instance is available at `127.0.0.1:5432` and `$AURORA_DATABASE_URL` is set automatically. Enter the Nix shell to get the full toolchain:

```shell
nix develop
```

```shell
# Run tests
go tool ginkgo -r

# Run tests with coverage
go tool ginkgo -r -coverprofile=coverprofile.out
```

Contributions are welcome. Please open an issue before submitting a pull request for significant changes.
- Fork the repo and create a feature branch
- Run `nix develop` to enter the development environment
- Write tests for new behaviour
- Ensure `go tool ginkgo -r` passes
- Open a PR — the CI pipeline will run tests automatically