Run the start script to launch all services:
```bash
./start.sh
```

This will:
- Check if Docker is installed (provides installation instructions if not)
- Start all services in the background
- Display connection details and web UI URLs
To stop all services:
```bash
./stop.sh
```

This stops all containers but preserves your data in Docker volumes.
To stop services and remove all data:
```bash
docker compose down -v
```

All services can be configured using environment variables. A template is provided in `.env.example`.
```bash
cp .env.example .env
```

Then edit `.env` to customize:
- Database credentials
- Service ports
- Component versions
Note: The .env file is gitignored by default to keep your credentials safe. All services have sensible defaults, so creating a .env file is optional.
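As an illustration, a customized `.env` might look like the fragment below. The variable names here are assumptions based on typical compose setups; check `.env.example` for the names this stack actually uses.

```shell
# Hypothetical .env values -- the real variable names are defined in .env.example
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
POSTGRES_PORT=5432
PGWEB_PORT=15080
```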
For a more lightweight system, you can disable the web UIs. They're convenient for development but not required for your applications to connect to the services.
To disable web UIs, comment out these services in docker-compose.yml:
- `pgweb` - PostgreSQL web UI
- `redis-commander` - Valkey/Redis web UI
- `redpanda-console` - Redpanda web UI
This will reduce memory usage and the number of running containers. Your applications can still connect to PostgreSQL, Valkey, and Redpanda using their standard ports.
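For example, disabling pgweb amounts to commenting out its service block. The exact keys below (image, port mapping) are illustrative and may differ from what is in your docker-compose.yml:

```yaml
services:
  # pgweb:
  #   image: sosedoff/pgweb
  #   ports:
  #     - "15080:8081"
```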
This setup includes dbmate for managing database migrations. Migrations are stored in the ./migrations directory and run automatically when you start services with ./start.sh.
Note: We generally recommend letting your application handle database migrations (using tools like Flyway, Liquibase, Entity Framework Migrations, etc.) as part of your application's startup process. However, we've provided dbmate here as a standalone tool to help you get started quickly or for cases where you need to manage migrations independently.
When you run ./start.sh, the dbmate container automatically:
- Connects to PostgreSQL
- Applies any pending migrations from the `./migrations` directory
- Exits after migrations complete
Any .sql files you add to the ./migrations directory will be automatically applied on the next startup.
```bash
docker run --rm -it --network host \
  -e DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres?sslmode=disable" \
  -v "$(pwd)/migrations:/db/migrations" \
  amacneil/dbmate:2.28.0 new create_users_table
```

This creates a new migration file in `./migrations` with up and down sections.
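The generated file is a plain SQL file containing dbmate's `-- migrate:up` and `-- migrate:down` markers; the table definition below is just an illustration of what you might fill in:

```sql
-- migrate:up
CREATE TABLE users (
    id BIGSERIAL PRIMARY KEY,
    email TEXT NOT NULL UNIQUE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- migrate:down
DROP TABLE users;
```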
To apply pending migrations manually:

```bash
docker run --rm -it --network host \
  -e DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres?sslmode=disable" \
  -v "$(pwd)/migrations:/db/migrations" \
  amacneil/dbmate:2.28.0 up
```

To roll back the most recent migration:

```bash
docker run --rm -it --network host \
  -e DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres?sslmode=disable" \
  -v "$(pwd)/migrations:/db/migrations" \
  amacneil/dbmate:2.28.0 down
```

Migrations run automatically when you start services with ./start.sh. The dbmate container will apply any pending migrations and then exit.
Redpanda (Kafka) organizes messages into topics. You can manage topics through the web UI at http://localhost:15082 or using the command line.
```bash
# Create a topic with 3 partitions
docker exec -it redpanda rpk topic create my-topic --partitions 3 --replicas 1

# List topics
docker exec -it redpanda rpk topic list

# Produce a message
echo "Hello Redpanda" | docker exec -i redpanda rpk topic produce my-topic

# Consume messages
docker exec -it redpanda rpk topic consume my-topic
```

Note: Topics are automatically created when you first produce to them (if `auto_create_topics_enabled` is true, which is the default). However, explicitly creating topics lets you control partition and replication settings.
- Host: localhost
- Port: 5432
- User: postgres
- Password: postgres
- Database: postgres
- Connection String: `postgresql://postgres:postgres@localhost:5432/postgres`
- Web UI: http://localhost:15080 (pgweb)
- Documentation: https://www.postgresql.org/docs/
- Host: localhost
- Port: 6379
- Connection String: `redis://localhost:6379`
- Web UI: http://localhost:15081 (Redis Commander)
- Valkey Documentation: https://valkey.io/docs/
- Redis Documentation: https://redis.io/docs/ (Valkey is Redis-compatible)
- Kafka Broker: localhost:19092
- Schema Registry: http://localhost:18081
- Pandaproxy (REST): http://localhost:18082
- Admin API: http://localhost:19644
- Web UI: http://localhost:15082 (Redpanda Console)
- Redpanda Documentation: https://docs.redpanda.com/
- Kafka Documentation: https://kafka.apache.org/documentation/ (Redpanda is Kafka-compatible)
```go
// PostgreSQL (pgx)
conn, _ := pgx.Connect(ctx, "postgresql://postgres:postgres@localhost:5432/postgres")
var result string
conn.QueryRow(ctx, "SELECT 'Hello from PostgreSQL'").Scan(&result)
```

```go
// Valkey/Redis (go-redis)
rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
rdb.Set(ctx, "key", "value", 0)
val, _ := rdb.Get(ctx, "key").Result()
```

```go
// Redpanda/Kafka (sarama)
// Publish
config := sarama.NewConfig()
config.Producer.Return.Successes = true // required by SyncProducer
producer, _ := sarama.NewSyncProducer([]string{"localhost:19092"}, config)
producer.SendMessage(&sarama.ProducerMessage{Topic: "test", Value: sarama.StringEncoder("hello")})

// Subscribe
consumer, _ := sarama.NewConsumer([]string{"localhost:19092"}, nil)
partitionConsumer, _ := consumer.ConsumePartition("test", 0, sarama.OffsetNewest)
msg := <-partitionConsumer.Messages()
```

```csharp
// PostgreSQL (Npgsql)
var conn = new NpgsqlConnection("Host=localhost;Port=5432;Username=postgres;Password=postgres;Database=postgres");
await conn.OpenAsync();
var cmd = new NpgsqlCommand("SELECT 'Hello from PostgreSQL'", conn);
var result = await cmd.ExecuteScalarAsync();
```

```csharp
// Valkey/Redis (StackExchange.Redis)
var redis = ConnectionMultiplexer.Connect("localhost:6379");
var db = redis.GetDatabase();
await db.StringSetAsync("key", "value");
var value = await db.StringGetAsync("key");
```

```csharp
// Redpanda/Kafka (Confluent.Kafka)
// Publish
var producerConfig = new ProducerConfig { BootstrapServers = "localhost:19092" };
using var producer = new ProducerBuilder<Null, string>(producerConfig).Build();
await producer.ProduceAsync("test", new Message<Null, string> { Value = "hello" });

// Subscribe
var consumerConfig = new ConsumerConfig { BootstrapServers = "localhost:19092", GroupId = "test-group" };
using var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build();
consumer.Subscribe("test");
var msg = consumer.Consume();
```