One console for every database, across every cloud.
A web-based PostgreSQL and Redis management tool for querying multiple database instances across cloud providers simultaneously — with role-based access, full audit trails, and release verification built in.
Managing PostgreSQL across AWS, GCP, or any other cloud means juggling connections and credentials and comparing results by hand. This tool gives you one UI to query them all: run the same SQL on every cloud at once, compare results side by side, and keep a full audit trail with role‑based access control.
- **Compare replicas**: Run the same query on AWS and GCP simultaneously. Catch divergence after migrations.
- **Health check fleets**: One query, every instance, side‑by‑side results with timing per cloud.
- **Ship schema changes**: Execute DDL across environments in one shot, with rollback on failure.
- **Audit everything**: Complete execution log with role‑based permissions and password‑protected destructive ops.
◈ Features
Core
| Feature | Description |
| --- | --- |
| Multi‑cloud execution | Query all clouds simultaneously or target a specific one |
| Dynamic configuration | Add clouds and databases via JSON — zero code changes |
| Async query engine | Non‑blocking execution with progress + cancellation |
| Multi‑statement support | Batches separated by `;` with per‑statement results |
| Role‑based access | MASTER / USER / READER with granular SQL control |
| Password‑protected ops | DROP, TRUNCATE, DELETE, ALTER require MASTER password |
| Query history & audit | Full execution log with filtering and pagination |
| Env variable substitution | `${VAR_NAME}` in config for secure credential management |
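The `${VAR_NAME}` substitution mentioned above could work roughly as follows. This is an illustrative sketch, not the tool's actual implementation; the `substituteEnv` helper name is assumed:

```javascript
// Minimal sketch of ${VAR_NAME} substitution in a config object.
// Hypothetical helper; unresolved variables are left untouched here.
function substituteEnv(value) {
  if (typeof value === 'string') {
    return value.replace(/\$\{(\w+)\}/g, (match, name) =>
      process.env[name] !== undefined ? process.env[name] : match);
  }
  if (Array.isArray(value)) return value.map(substituteEnv);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, substituteEnv(v)]));
  }
  return value;
}

process.env.CLOUD2_DB_PASSWORD = 's3cret';
const config = { host: 'remote-host', password: '${CLOUD2_DB_PASSWORD}' };
console.log(substituteEnv(config));
// { host: 'remote-host', password: 's3cret' }
```

Recursing over the whole config object means any nested field, including array entries, can reference an environment variable.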
SQL Editor

| Feature | Description |
| --- | --- |
| Monaco Editor | VS Code's editor engine with PostgreSQL syntax highlighting |
| SQL formatting | One‑click format, PostgreSQL dialect, uppercase keywords |
| Auto‑save | Drafts saved every 5 seconds to localStorage with restore on reload |
| Keyboard shortcuts | ⌘/Ctrl + Enter to execute |
| Dark theme | Full dark mode UI |
Results Panel

| Feature | Description |
| --- | --- |
| Side‑by‑side cloud results | Color‑coded expandable sections per cloud |
| Table and JSON views | Toggle between formatted table and raw JSON |
| CSV / JSON export | Download results per cloud |
| Per‑statement breakdown | Individual results for each statement in a batch |
| Execution timing | Duration in milliseconds per cloud |
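The per‑statement breakdown relies on splitting a batch on `;`. A naive splitter might look like the sketch below; note that a production splitter must also handle dollar‑quoted blocks and comments, which this illustration deliberately omits:

```javascript
// Naive SQL batch splitter: splits on ';' while respecting single-quoted
// string literals (including the '' escape). Illustrative only.
function splitStatements(sql) {
  const statements = [];
  let current = '';
  let inString = false;
  for (let i = 0; i < sql.length; i++) {
    const ch = sql[i];
    if (ch === "'") {
      // Handle the '' escape inside a string literal.
      if (inString && sql[i + 1] === "'") { current += "''"; i++; continue; }
      inString = !inString;
    }
    if (ch === ';' && !inString) {
      if (current.trim()) statements.push(current.trim());
      current = '';
    } else {
      current += ch;
    }
  }
  if (current.trim()) statements.push(current.trim());
  return statements;
}

console.log(splitStatements("SELECT 1; SELECT 'a;b'; UPDATE t SET x = 2;"));
// [ 'SELECT 1', "SELECT 'a;b'", 'UPDATE t SET x = 2' ]
```

Each resulting statement can then be executed and reported individually, which is what makes the per‑statement results view possible.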
Redis Manager

| Feature | Description |
| --- | --- |
| Multi‑cloud Redis | Execute commands across all configured Redis instances simultaneously |
```sql
UPDATE dual_db_manager.users
SET role = 'MASTER', is_active = true
WHERE username = 'your-username';
```
Log out and log back in. You now have full access.
◈ Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| `PORT` | 3000 | Backend server port |
| `NODE_ENV` | development | development or production |
| `REDIS_HOST` | localhost | Redis hostname |
| `REDIS_PORT` | 6379 | Redis port |
| `REDIS_PASSWORD` | — | Redis password (optional) |
| `REDIS_DB` | 0 | Redis database number |
| `SESSION_SECRET` | — | Required. Random string for session encryption |
| `FRONTEND_URL` | http://localhost:5173 | CORS allowed origin |
| `MAX_QUERY_TIMEOUT_MS` | 300000 | Overall query timeout (5 min) |
| `STATEMENT_TIMEOUT_MS` | 300000 | Per‑statement PostgreSQL timeout (5 min) |
| `REDIS_EXECUTION_TTL_SECONDS` | 300 | Async execution state TTL in Redis (5 min) |
| `RUN_MIGRATIONS` | false | Auto‑create dual_db_manager schema on startup |
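The defaults above might be applied in the backend roughly like this. This is an illustrative sketch, not the tool's actual config module; the property names are assumed:

```javascript
// Illustrative sketch of loading the variables above with their documented
// defaults. Property names are assumed, not the tool's actual config module.
const config = {
  port: parseInt(process.env.PORT ?? '3000', 10),
  nodeEnv: process.env.NODE_ENV ?? 'development',
  redis: {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
    password: process.env.REDIS_PASSWORD, // optional, may be undefined
    db: parseInt(process.env.REDIS_DB ?? '0', 10),
  },
  maxQueryTimeoutMs: parseInt(process.env.MAX_QUERY_TIMEOUT_MS ?? '300000', 10),
  sessionSecret: process.env.SESSION_SECRET,
};

// SESSION_SECRET has no default: complain loudly instead of running
// with a predictable secret.
if (!config.sessionSecret) {
  console.error('SESSION_SECRET is required');
}
```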
◆ Database schema
Migrations run automatically on startup when `RUN_MIGRATIONS=true`, or can be run manually with `npm run migrate`.
`dual_db_manager.users`

| Column | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| username | VARCHAR(255) | Unique login name |
| password_hash | TEXT | bcrypt hash |
| email | VARCHAR(255) | Unique email |
| name | VARCHAR(255) | Display name |
| role | VARCHAR(50) | MASTER, USER, or READER |
| is_active | BOOLEAN | Account enabled (default: false) |
| created_at | TIMESTAMP | Registration time |
`dual_db_manager.query_history`

| Column | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| user_id | UUID | Foreign key to users |
| query | TEXT | Executed SQL |
| database_name | VARCHAR(50) | Target database |
| execution_mode | VARCHAR(50) | both or specific cloud name |
| cloud_results | JSONB | Per‑cloud results with success, duration, rows |
| created_at | TIMESTAMP | Execution time |
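A `cloud_results` value might look like the sketch below. The field names are hypothetical, inferred only from the column description ("per‑cloud results with success, duration, rows"), and may differ from the tool's actual schema:

```javascript
// Hypothetical example of a cloud_results JSONB value. Field names are
// illustrative, inferred from the column description, not the real schema.
const cloudResults = {
  cloud1: { success: true, durationMs: 42, rows: [{ count: 10 }] },
  cloud2: { success: false, durationMs: 37, error: 'connection refused' },
};

// Audit tooling can then filter on these fields, e.g. list clouds where
// an execution failed:
const failedClouds = Object.entries(cloudResults)
  .filter(([, result]) => !result.success)
  .map(([cloud]) => cloud);
console.log(failedClouds); // [ 'cloud2' ]
```

Storing the per‑cloud outcomes as JSONB keeps the history row compact while still letting audit queries drill into individual cloud results.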
⚓ Docker
Both services use multi‑stage builds for minimal image size.
```bash
# Build (use --platform linux/amd64 when deploying to x86 servers from ARM machines)
docker build --platform linux/amd64 -t multi-cloud-db-backend ./backend
docker build --platform linux/amd64 -t multi-cloud-db-frontend ./frontend

# Run backend
docker run -p 3000:3000 \
  --env-file backend/.env \
  multi-cloud-db-backend

# Run frontend (BACKEND_URL injected at runtime — no rebuild per environment)
docker run -p 80:80 \
  -e BACKEND_URL=http://your-backend:3000 \
  multi-cloud-db-frontend
```
✓ Health checks built in — Backend GET /health · Frontend GET /
☸ Kubernetes

Manifests in `k8s/`:

| File | Description |
| --- | --- |
| backend.yaml | Backend Deployment (2 replicas) + Service + liveness/readiness probes |
| frontend.yaml | Frontend Deployment (2 replicas) + Nginx ConfigMap + Service |
| secrets.yaml.example | Template for secrets (copy to secrets.yaml and fill in) |
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Make your changes
4. Run linting: `cd backend && npm run lint` and `cd frontend && npm run lint`
5. Commit: `git commit -m "Add my feature"`
6. Push: `git push origin feature/my-feature`
7. Open a Pull Request
Built for teams managing PostgreSQL across multiple clouds.
Example cloud configuration (primary cloud, secondary clouds, and the history database):

```json
{
  "primary": {
    "cloudName": "cloud1",
    "db_configs": [
      {
        "name": "mydb",
        "label": "My Database",
        "host": "localhost",
        "port": 5432,
        "user": "postgres",
        "password": "password",
        "database": "mydb",
        "schemas": ["public"],
        "defaultSchema": "public"
      }
    ]
  },
  "secondary": [
    {
      "cloudName": "cloud2",
      "db_configs": [
        {
          "name": "mydb",
          "label": "My Database",
          "host": "remote-host",
          "port": 5432,
          "user": "postgres",
          "password": "${CLOUD2_DB_PASSWORD}",
          "database": "mydb",
          "schemas": ["public"],
          "defaultSchema": "public"
        }
      ]
    }
  ],
  "history": {
    "host": "localhost",
    "port": 5432,
    "user": "postgres",
    "password": "password",
    "database": "mydb"
  }
}
```