diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 64a1b7c..add6671 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -6,6 +6,11 @@ on: pull_request: branches: [ "main" ] +# Prevent duplicate builds on PR branches +concurrency: + group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }} + cancel-in-progress: true + jobs: build-jvm: runs-on: ubuntu-latest diff --git a/docs/pages/_meta.ts b/docs/pages/_meta.ts index aade85a..d92682f 100644 --- a/docs/pages/_meta.ts +++ b/docs/pages/_meta.ts @@ -1,5 +1,6 @@ export default { "index": "Getting started", "stacks": "Working with stacks", + "providers": "Providers", "examples": "Examples", }; diff --git a/docs/pages/providers.mdx b/docs/pages/providers.mdx new file mode 100644 index 0000000..924d5be --- /dev/null +++ b/docs/pages/providers.mdx @@ -0,0 +1,175 @@ +--- +title: Providers Overview +--- + +# Providers + +Nebula providers are components that spin up infrastructure and services for testing and development. Each provider uses TestContainers or native implementations to create isolated, disposable environments. + +## Available Providers + +### HTTP Server +Creates an HTTP server using Ktor for defining REST APIs and web endpoints. +- No Docker required - runs directly in Nebula +- Full support for GET, POST, PUT, DELETE +- Path parameters and request handling + +[View HTTP Documentation →](/providers/http) + +### Kafka +Starts a Kafka broker with support for producing messages in multiple formats. +- JSON, Avro, and Protobuf message support +- Periodic message production +- Configurable topics and partitions + +[View Kafka Documentation →](/providers/kafka) + +### SQL Databases +PostgreSQL and MySQL database providers with schema and data seeding. +- Create tables with DDL +- Seed data on startup +- jOOQ integration for queries + +[View SQL Documentation →](/providers/sql) + +### MongoDB +NoSQL document database for flexible data models. 
+- Create collections +- Seed documents +- Nested documents and arrays + +[View MongoDB Documentation →](/providers/mongo) + +### S3 +S3-compatible object storage using LocalStack. +- Create buckets +- Upload files (inline, filesystem, streaming) +- AWS SDK integration + +[View S3 Documentation →](/providers/s3) + +### Hazelcast +Distributed in-memory data grid for caching and distributed computing. +- Distributed maps, queues, sets +- Session management +- Event streaming + +[View Hazelcast Documentation →](/providers/hazelcast) + +### Taxi Publisher +Publishes Taxi schema packages to Orbital schema servers. +- Define type systems +- Share schemas +- Configuration management + +[View Taxi Publisher Documentation →](/providers/taxi) + +## Common Patterns + +### Combining Providers + +Nebula allows you to combine multiple providers in a single stack: + +```kotlin +stack { + // Database + postgres { + table("users", """ + CREATE TABLE users (id SERIAL PRIMARY KEY, username VARCHAR(100)) + """, data = listOf( + mapOf("username" to "alice") + )) + } + + // Message broker + kafka { + producer("1s".duration(), "events") { + jsonMessage { + mapOf("event" to "user_created", "user" to "alice") + } + } + } + + // HTTP API + http(port = 8080) { + get("/users") { call -> + val dsl = infra.database.single().dsl + val users = dsl.selectFrom("users").fetch() + call.respondText(users.toString()) + } + } +} +``` + +### Accessing Configuration + +Each provider exposes configuration through the returned infrastructure: + +```kotlin +val infra = stack { + postgres { /* ... */ } + kafka { /* ... */ } + http { /* ... 
*/ } +}.start() + +// Access database configuration +val jdbcUrl = infra.database.single().componentInfo!!.componentConfig.jdbcUrl + +// Access Kafka bootstrap servers +val bootstrapServers = infra.kafka.single().bootstrapServers + +// Access HTTP base URL +val baseUrl = infra.http.single().baseUrl +``` + +### Custom Images + +Most providers support custom Docker images: + +```kotlin +stack { + postgres(imageName = "postgres:15") { /* ... */ } + kafka(imageName = "confluentinc/cp-kafka:7.0.0") { /* ... */ } + mongo(imageName = "mongo:6.0", databaseName = "test") { /* ... */ } + s3(imageName = "localstack/localstack:3.0") { /* ... */ } +} +``` + +### Component Naming + +Assign custom names to components for clarity in multi-component stacks: + +```kotlin +stack { + postgres(componentName = "users-db") { /* ... */ } + postgres(componentName = "orders-db") { /* ... */ } + + kafka(componentName = "events-broker") { /* ... */ } +} +``` + +## Provider Lifecycle + +1. **Configuration**: Define providers in the `stack` block +2. **Start**: Call `.start()` to initialize all providers +3. **Ready**: Providers are running and accessible +4. **Shutdown**: Call `shutDownAll()` to clean up resources + +```kotlin +val infra = stack { + postgres { /* ... */ } + kafka { /* ... */ } +}.start() + +// Use the infrastructure +// ... 
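+// For example (illustrative - these accessors appear elsewhere on this page
+// and depend on which providers the stack declares):
+// val jdbcUrl = infra.database.single().componentInfo!!.componentConfig.jdbcUrl
+// val brokers = infra.kafka.single().bootstrapServers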
+ +// Clean up +infra.shutDownAll() +``` + +## Next Steps + +- Explore individual provider documentation for detailed examples +- Check out [example stacks](/examples) for real-world usage patterns +- Learn about [working with stacks](/stacks) for lifecycle management diff --git a/docs/pages/providers/_meta.ts b/docs/pages/providers/_meta.ts new file mode 100644 index 0000000..2fd537d --- /dev/null +++ b/docs/pages/providers/_meta.ts @@ -0,0 +1,9 @@ +export default { + "http": "HTTP Server", + "kafka": "Kafka", + "sql": "SQL Databases", + "mongo": "MongoDB", + "s3": "S3", + "hazelcast": "Hazelcast", + "taxi": "Taxi Publisher", +}; diff --git a/docs/pages/providers/hazelcast.mdx b/docs/pages/providers/hazelcast.mdx new file mode 100644 index 0000000..ec5ed34 --- /dev/null +++ b/docs/pages/providers/hazelcast.mdx @@ -0,0 +1,234 @@ +--- +title: Hazelcast +--- + +# Hazelcast + +The `hazelcast` provider creates an in-memory data grid using Hazelcast with the `hazelcast/hazelcast:5` image by default. This is useful for distributed caching, session management, and in-memory computing scenarios. 
+ +## Quick Start + +```kotlin +stack { + hazelcast { + // Basic Hazelcast instance + } +} +``` + +## Configuration + +### Custom Image + +```kotlin +hazelcast(imageName = "hazelcast/hazelcast:5.3") { + // configuration here +} +``` + +### Component Name + +Customize the component name for multiple Hazelcast instances: + +```kotlin +hazelcast(componentName = "cache-cluster") { + // configuration here +} +``` + +## Use Cases + +Hazelcast is typically used for: + +- **Distributed caching**: Store frequently accessed data in memory +- **Session management**: Share session data across multiple application instances +- **Message queues**: Distribute work across multiple consumers +- **Event streaming**: Publish and subscribe to events +- **Distributed computing**: Run computations across a cluster + +## Complete Example + +```kotlin +stack { + hazelcast { + // Starts a single Hazelcast node + } +} +``` + +## Returned Configuration + +When a `hazelcast` component is declared, the following configuration is available: + +| Property | Description | +|----------|-------------| +| `port` | The port Hazelcast is listening on (default: 5701) | + +### Accessing Configuration + +```kotlin +val infra = stack { + hazelcast { + // configuration + } +}.start() + +val hazelcastInfo = infra.hazelcast.single().componentInfo!!.componentConfig +val port = hazelcastInfo.port + +println("Hazelcast running on port: $port") +``` + +## Working with Hazelcast Client + +Connect to the Hazelcast instance using the Hazelcast Java Client: + +```kotlin +import com.hazelcast.client.HazelcastClient +import com.hazelcast.client.config.ClientConfig + +val infra = stack { + hazelcast { + // configuration + } +}.start() + +val port = infra.hazelcast.single().componentInfo!!.componentConfig.port + +// Configure client +val config = ClientConfig().apply { + networkConfig.apply { + addresses.add("localhost:$port") + } +} + +// Create client connection +val client = HazelcastClient.newHazelcastClient(config) 
+ +// Verify connection +println("Connected to cluster with ${client.cluster.members.size} member(s)") + +// Access distributed data structures +val map = client.getMap("my-distributed-map") +map.put("key1", "value1") +println("Value: ${map.get("key1")}") + +client.shutdown() +``` + +## Distributed Data Structures + +Hazelcast provides various distributed data structures: + +### Distributed Map + +```kotlin +val client = HazelcastClient.newHazelcastClient(config) +val map = client.getMap("users") + +map.put("user:1", mapOf("name" to "Alice", "age" to 30)) +map.put("user:2", mapOf("name" to "Bob", "age" to 25)) + +val user = map.get("user:1") +``` + +### Distributed Queue + +```kotlin +val queue = client.getQueue("tasks") + +queue.put("task1") +queue.put("task2") + +val task = queue.take() // Blocks until item available +println("Processing: $task") +``` + +### Distributed Set + +```kotlin +val set = client.getSet("unique-values") + +set.add("value1") +set.add("value2") +set.add("value1") // Duplicate, won't be added + +println("Set size: ${set.size}") // 2 +``` + +### Distributed List + +```kotlin +val list = client.getList("events") + +list.add("event1") +list.add("event2") + +list.forEach { event -> + println("Event: $event") +} +``` + +## Integration Example + +Combining Hazelcast with other Nebula components: + +```kotlin +stack { + // Start Hazelcast for caching + hazelcast { + // configuration + } + + // Start HTTP server + http { + val hazelcastPort = infra.hazelcast.single().componentInfo!!.componentConfig.port + val clientConfig = ClientConfig().apply { + networkConfig.addresses.add("localhost:$hazelcastPort") + } + val hazelcastClient = HazelcastClient.newHazelcastClient(clientConfig) + val cache = hazelcastClient.getMap("api-cache") + + get("/data/{key}") { call -> + val key = call.parameters["key"]!! 
+
+            // Check cache first
+            val cachedValue = cache.get(key)
+            if (cachedValue != null) {
+                call.respondText("Cached: $cachedValue")
+            } else {
+                // Simulate data fetch
+                val value = "Data for $key"
+                cache.put(key, value)
+                call.respondText("Fresh: $value")
+            }
+        }
+    }
+}
+```
+
+## Configuration Options
+
+While the Hazelcast provider starts a basic instance, you can configure advanced settings through the Hazelcast client:
+
+```kotlin
+val config = ClientConfig().apply {
+    networkConfig.apply {
+        addresses.add("localhost:$port")
+        connectionTimeout = 5000
+    }
+
+    connectionStrategyConfig.apply {
+        connectionRetryConfig.apply {
+            clusterConnectTimeoutMillis = 10000
+        }
+    }
+}
+```
+
+## Resources
+
+- [Hazelcast Documentation](https://docs.hazelcast.com/)
+- [Hazelcast Client API](https://docs.hazelcast.com/hazelcast/latest/clients/java)
+- [Distributed Data Structures](https://docs.hazelcast.com/hazelcast/latest/data-structures/overview)
diff --git a/docs/pages/providers/http.mdx b/docs/pages/providers/http.mdx
index 0a8a9bb..a388b32 100644
--- a/docs/pages/providers/http.mdx
+++ b/docs/pages/providers/http.mdx
@@ -1,32 +1,130 @@
-## HTTP
+---
+title: HTTP Server
+---
-The `http` block declares an HTTP server
+# HTTP Server
-Note: Unlike most blocks in a stack, this block does not use Docker - it uses [ktor](https://ktor.io/),
-as the Nebula http engine is already running Ktor.
+The `http` provider creates an HTTP server with custom routes using [Ktor](https://ktor.io/). Unlike most Nebula providers, this doesn't spin up a Docker container - it runs directly within the Nebula runtime.
-This is a thin wrapper around [Ktor routes](https://ktor.io/docs/server-routing.html#define_route). See the Ktor docs for more information.
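+Because the server runs in-process, a started stack can be exercised with any HTTP client. Below is a sketch using the JDK's built-in `java.net.http.HttpClient` (illustrative only - it relies on the `baseUrl` accessor that `http` components expose, as used elsewhere in these docs):
+
+```kotlin
+import java.net.URI
+import java.net.http.HttpClient
+import java.net.http.HttpRequest
+import java.net.http.HttpResponse
+
+val infra = stack {
+    http {
+        get("/hello") { call -> call.respondText("Hello, World!") }
+    }
+}.start()
+
+// Hit the route we just declared
+val client = HttpClient.newHttpClient()
+val request = HttpRequest.newBuilder(URI.create("${infra.http.single().baseUrl}/hello")).build()
+val response = client.send(request, HttpResponse.BodyHandlers.ofString())
+check(response.body() == "Hello, World!")
+
+infra.shutDownAll()
+```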
+## Quick Start + +```kotlin +stack { + http { + get("/hello") { call -> + call.respondText("Hello, World!") + } + } +} +``` + +## Configuration + +### Port Configuration + +By default, Nebula assigns a random available port. Specify a custom port if needed: + +```kotlin +http(port = 9000) { + // routes here +} +``` + +## Defining Routes + +The HTTP provider supports all standard HTTP methods: `GET`, `POST`, `PUT`, `DELETE`. Routes are defined using Ktor's routing DSL. + +### GET Requests + +```kotlin +http { + // Simple text response + get("/hello") { call -> + call.respondText("Hello, World!") + } + + // Path parameters + get("/users/{id}") { call -> + val id = call.parameters["id"] + call.respondText("User $id") + } + + // No content response + get("/health") { call -> + call.respond(HttpStatusCode.NoContent) + } +} +``` + +### POST Requests + +```kotlin +http { + post("/echo") { call -> + val body = call.receiveText() + call.respondText(body) + } + + // JSON handling + post("/users") { call -> + val body = call.receiveText() + call.respondText("Created user with data: $body") + } +} +``` + +### PUT Requests + +```kotlin +http { + put("/update/{id}") { call -> + val id = call.parameters["id"] + val body = call.receiveText() + call.respondText("Updated user $id with $body") + } +} +``` + +### DELETE Requests + +```kotlin +http { + delete("/delete/{id}") { call -> + val id = call.parameters["id"] + call.respondText("Deleted user $id", status = HttpStatusCode.NoContent) + } +} +``` + +## Complete Example ```kotlin stack { - // port is optional, will pick a random port http(port = 9000) { get("/hello") { call -> call.respondText("Hello, World!") } + + get("/") { call -> + call.respond(HttpStatusCode.NoContent) + } + post("/echo") { call -> val body = call.receiveText() call.respondText(body) } + get("/users/{id}") { call -> val id = call.parameters["id"] call.respondText("User $id") } + put("/update/{id}") { call -> val id = call.parameters["id"] val body = 
call.receiveText()
         call.respondText("Updated user $id with $body")
     }
+
     delete("/delete/{id}") { call ->
         val id = call.parameters["id"]
         call.respondText("Deleted user $id", status = HttpStatusCode.NoContent)
@@ -35,3 +133,40 @@ stack {
     }
 }
 ```
+## Returned Configuration
+
+When an `http` component is declared, the following configuration is available:
+
+| Property | Description |
+|----------|-------------|
+| `baseUrl` | The full base URL of the HTTP server (e.g., `http://localhost:9000`) |
+| `port` | The port the server is listening on |
+
+### Accessing Configuration
+
+```kotlin
+val infra = stack {
+    http {
+        get("/hello") { call ->
+            call.respondText("Hello!")
+        }
+    }
+}.start()
+
+val baseUrl = infra.http.single().baseUrl
+// Use baseUrl to make requests: http://localhost:xxxxx
+```
+
+## Working with Ktor
+
+The HTTP provider is a thin wrapper around [Ktor routes](https://ktor.io/docs/server-routing.html). You have full access to Ktor's capabilities:
+
+- **Request handling**: `call.receiveText()`, `call.receive()`
+- **Response methods**: `call.respondText()`, `call.respond()`, `call.respondBytes()`
+- **Path parameters**: `call.parameters["name"]`
+- **Query parameters**: `call.request.queryParameters["name"]`
+- **Headers**: `call.request.headers["Header-Name"]`
+- **Status codes**: `HttpStatusCode.*`
+
+See the [Ktor documentation](https://ktor.io/docs/server-routing.html) for more advanced features.
+
diff --git a/docs/pages/providers/kafka.mdx b/docs/pages/providers/kafka.mdx
index 577b1fa..313c8d9 100644
--- a/docs/pages/providers/kafka.mdx
+++ b/docs/pages/providers/kafka.mdx
@@ -1,18 +1,12 @@
-## Kafka
+---
+title: Kafka
+---
-The `kafka` block declares a Kafka broker, using the `confluentinc/cp-kafka` image by default.
+# Kafka
-### Producing messages
-To declare a producer, use `producer` block, which emits messages periodically.
+The `kafka` provider starts a Kafka broker using TestContainers with the `confluentinc/cp-kafka:6.2.2` image by default. It supports producing messages in various formats: JSON, Avro, and Protobuf. -`producer` takes the following args: - -| Arg | Description | -|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `frequency` | A [Kotlin Duration](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.time/-duration/) indicating how frequently the producer closure should be called.
| -| `topic` | The topic to write to | - -The body is a function which returns a message to be written to Kafka +## Quick Start ```kotlin stack { @@ -29,9 +23,223 @@ stack { } ``` -### Returned values -When a `kafka` component is declared, the following data is returned: +## Configuration + +### Custom Image + +```kotlin +kafka(imageName = "confluentinc/cp-kafka:7.0.0") { + // configuration here +} +``` -| Key | Description | -|--------------------|-----------------------------------------------------------| -| `bootstrapServers` | The bootstrap servers address of the started Kafka broker | \ No newline at end of file +## Producing Messages + +Producers emit messages to Kafka topics at specified intervals. The `producer` function takes the following parameters: + +| Parameter | Type | Description | +|-----------|------|-------------| +| `frequency` | Duration | How often to emit messages (e.g., `"100ms".duration()`) | +| `topic` | String | The Kafka topic to write to | +| `partitions` | Int | Number of partitions for the topic (default: 1) | +| `keySerializer` | MessageSerializer | Serializer for message keys (default: String) | +| `valueSerializer` | MessageSerializer | Serializer for message values (default: String) | + +### JSON Messages + +The most common format for messages. The producer function returns a Map or any object that can be serialized to JSON. 
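As a rough illustration of that mapping, each produced map becomes one JSON object on the wire. The helper below is a deliberately naive, dependency-free sketch (Nebula's real serializer also handles escaping, nesting, and number formatting):

```kotlin
// Naive Map -> JSON rendering, for illustration only.
fun toJson(record: Map<String, Any?>): String =
    record.entries.joinToString(",", "{", "}") { (key, value) ->
        val rendered = when (value) {
            null -> "null"
            is String -> "\"$value\""   // real serializers also escape quotes etc.
            else -> value.toString()    // numbers, booleans
        }
        "\"$key\":$rendered"
    }

fun main() {
    val quote = mapOf("symbol" to "GBP/USD", "price" to 0.91)
    println(toJson(quote))  // {"symbol":"GBP/USD","price":0.91}
}
```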
+ +```kotlin +kafka { + producer("100ms".duration(), "stockQuotes") { + jsonMessage { + mapOf( + "symbol" to listOf("GBP/USD", "AUD/USD", "NZD/USD").random(), + "price" to Random.nextDouble(0.8, 0.95).toBigDecimal(), + "timestamp" to Instant.now().toString() + ) + } + } +} +``` + +### Producing Multiple Messages + +Use `jsonMessages` to emit multiple messages in a single call: + +```kotlin +kafka { + producer("1s".duration(), "orders") { + jsonMessages { + listOf( + mapOf("orderId" to 1, "status" to "pending"), + mapOf("orderId" to 2, "status" to "shipped"), + mapOf("orderId" to 3, "status" to "delivered") + ) + } + } +} +``` + +### Avro Messages + +Nebula supports binary Avro encoding from JSON data. Define an Avro schema and produce messages from standard JSON: + +```kotlin +val schema = avroSchema(""" +{ + "type": "record", + "name": "StockQuote", + "namespace": "com.example", + "fields": [ + {"name": "symbol", "type": "string"}, + {"name": "price", "type": "double"}, + {"name": "timestamp", "type": {"type": "long", "logicalType": "timestamp-millis"}} + ] +} +""".trimIndent()) + +kafka { + producer("100ms".duration(), "quotes", valueSerializer = MessageSerializer.ByteArray) { + avroMessage(schema) { + """ + { + "symbol": "AAPL", + "price": 150.25, + "timestamp": 1654027200000 + } + """ + } + } +} +``` + +#### Avro JSON Format + +For Avro-specific JSON (with union types), use `avroJsonMessage`: + +```kotlin +kafka { + producer("100ms".duration(), "quotes", valueSerializer = MessageSerializer.ByteArray) { + avroJsonMessage(schema) { + """ + { + "symbol": "AAPL", + "price": {"double": 150.25}, + "timestamp": 1654027200000 + } + """ + } + } +} +``` + +### Protobuf Messages + +Produce binary Protobuf messages from JSON data: + +```kotlin +val schema = protobufSchema(""" +syntax = "proto3"; + +message Quote { + string symbol = 1; + double price = 2; + int64 timestamp = 3; +} +""".trimIndent()) + +kafka { + producer("100ms".duration(), "quotes", valueSerializer = 
MessageSerializer.ByteArray) { + protoMessage(schema, "Quote") { + mapOf( + "symbol" to "AAPL", + "price" to 150.25, + "timestamp" to System.currentTimeMillis() + ) + } + } +} +``` + +### Multiple Protobuf Messages + +```kotlin +kafka { + producer("1s".duration(), "quotes", valueSerializer = MessageSerializer.ByteArray) { + protoMessages(schema, "Quote") { + listOf( + mapOf("symbol" to "AAPL", "price" to 150.25), + mapOf("symbol" to "GOOGL", "price" to 2800.50) + ) + } + } +} +``` + +## Complete Example + +```kotlin +stack { + kafka { + // JSON producer + producer("500ms".duration(), "orders") { + jsonMessage { + mapOf( + "orderId" to UUID.randomUUID().toString(), + "amount" to Random.nextDouble(10.0, 1000.0), + "status" to "pending" + ) + } + } + + // Avro producer + val schema = avroSchema("""...""") + producer("200ms".duration(), "events", valueSerializer = MessageSerializer.ByteArray) { + avroMessage(schema) { + """{"eventId": "123", "type": "click"}""" + } + } + } +} +``` + +## Returned Configuration + +When a `kafka` component is declared, the following configuration is available: + +| Property | Description | +|----------|-------------| +| `bootstrapServers` | The bootstrap servers address (e.g., `PLAINTEXT://172.17.0.1:49154`) | + +### Accessing Configuration + +```kotlin +val infra = stack { + kafka { + producer("100ms".duration(), "test") { + jsonMessage { mapOf("data" to "value") } + } + } +}.start() + +val bootstrapServers = infra.kafka.single().bootstrapServers +// Use with Kafka clients +``` + +## Message Serializers + +Nebula provides two built-in serializers: + +- `MessageSerializer.String` - For string messages (default) +- `MessageSerializer.ByteArray` - For binary data (Avro, Protobuf) + +Specify the serializer in the `producer` call: + +```kotlin +producer("100ms".duration(), "topic", + keySerializer = MessageSerializer.String, + valueSerializer = MessageSerializer.ByteArray) { + // message generation +} +``` \ No newline at end of file diff 
--git a/docs/pages/providers/mongo.mdx b/docs/pages/providers/mongo.mdx index 3df9364..8acfd28 100644 --- a/docs/pages/providers/mongo.mdx +++ b/docs/pages/providers/mongo.mdx @@ -1,43 +1,293 @@ -## Mongo -Nebula currently supports the NoSQL database MongoDB. +--- +title: MongoDB +--- - * `mongodb` : Declares an image using the `mongo:7` image by default +# MongoDB + +The `mongo` provider creates a MongoDB instance using TestContainers with the `mongo:7.0.16` image by default. It supports creating collections and seeding them with document data. + +## Quick Start + +```kotlin +stack { + mongo(databaseName = "myapp") { + collection("users", data = listOf( + mapOf("name" to "Alice", "age" to 30), + mapOf("name" to "Bob", "age" to 25) + )) + } +} +``` + +## Configuration + +### Custom Image + +```kotlin +mongo(imageName = "mongo:6.0", databaseName = "myapp") { + // configuration here +} +``` + +### Database Name + +The `databaseName` parameter is **required** and specifies which database to use: + +```kotlin +mongo(databaseName = "testDb") { + // collections here +} +``` + +## Creating Collections + +Use the `collection` function to create a MongoDB collection and optionally seed it with documents. 
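+Seeding is roughly equivalent to an `insertMany` with the MongoDB Java driver - a hedged sketch, assuming the standard sync driver and `org.bson.Document` (Nebula's actual implementation may differ):
+
+```kotlin
+import com.mongodb.client.MongoClients
+import org.bson.Document
+
+// Roughly what collection("people", data = ...) amounts to
+// (connectionString would come from the started stack):
+val client = MongoClients.create(connectionString)
+val db = client.getDatabase("testDb")
+db.getCollection("people").insertMany(listOf(
+    Document(mapOf("name" to "Jimmy", "age" to 25)),
+    Document(mapOf("name" to "Jack", "age" to 43))
+))
+```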
+
+### Parameters
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `name` | String | The name of the collection |
+| `data` | `List<Map<String, Any?>>` | Optional list of documents to insert |
+
+### Empty Collection
+
+```kotlin
+mongo(databaseName = "testDb") {
+    collection("users")
+}
+```
+
+### Collection with Documents
+
+```kotlin
+mongo(databaseName = "testDb") {
+    collection("people", data = listOf(
+        mapOf(
+            "name" to "Jimmy",
+            "age" to 25,
+            "email" to "[email protected]"
+        ),
+        mapOf(
+            "name" to "Jack",
+            "age" to 43,
+            "email" to "[email protected]"
+        )
+    ))
+}
+```
+
+### Multiple Collections
+
+```kotlin
+mongo(databaseName = "testDb") {
+    collection("users", data = listOf(
+        mapOf("username" to "alice", "role" to "admin"),
+        mapOf("username" to "bob", "role" to "user")
+    ))
+
+    collection("products", data = listOf(
+        mapOf(
+            "name" to "Widget",
+            "price" to 19.99,
+            "inStock" to true
+        ),
+        mapOf(
+            "name" to "Gadget",
+            "price" to 29.99,
+            "inStock" to false
+        )
+    ))
+}
+```
+
+## Document Structure
+
+MongoDB documents are represented as `Map<String, Any?>`. Nested documents and arrays are supported:
+
+### Nested Documents
+
+```kotlin
+mongo(databaseName = "testDb") {
+    collection("customers", data = listOf(
+        mapOf(
+            "name" to "Alice Smith",
+            "address" to mapOf(
+                "street" to "123 Main St",
+                "city" to "New York",
+                "zipCode" to "10001"
+            ),
+            "age" to 30
+        )
+    ))
+}
+```
+
+### Arrays
 ```kotlin
-// use the default image.
-mongo { - // definition goes here +mongo(databaseName = "testDb") { + collection("orders", data = listOf( + mapOf( + "orderId" to "ORD-001", + "items" to listOf("item1", "item2", "item3"), + "tags" to listOf("priority", "express") + ) + )) } +``` + +### Complex Example -// custom image -mongo(imageName = "mongo:6") { - // definition goes here +```kotlin +mongo(databaseName = "testDb") { + collection("employees", data = listOf( + mapOf( + "name" to "John Doe", + "age" to 35, + "department" to "Engineering", + "skills" to listOf("Kotlin", "Java", "Python"), + "address" to mapOf( + "street" to "456 Tech Blvd", + "city" to "San Francisco", + "state" to "CA" + ), + "projects" to listOf( + mapOf("name" to "Project A", "status" to "active"), + mapOf("name" to "Project B", "status" to "completed") + ), + "salary" to 120000, + "isActive" to true + ) + )) } ``` -## Defining collections -You can run DDL to create collections, and populate them with data by calling the `collection()` -function: +## Complete Example ```kotlin -mongo { - collection( - "people", data = listOf( +stack { + mongo(imageName = "mongo:7.0", databaseName = "testDb") { + collection("people", data = listOf( mapOf( "name" to "Jimmy", - "age" to 25 + "age" to 25, + "email" to "[email protected]", + "interests" to listOf("coding", "gaming") ), mapOf( "name" to "Jack", - "age" to 43 + "age" to 43, + "email" to "[email protected]", + "interests" to listOf("reading", "hiking") ) - ) - ) + )) + + collection("products", data = listOf( + mapOf( + "name" to "Laptop", + "category" to "Electronics", + "price" to 999.99, + "inStock" to true, + "specs" to mapOf( + "ram" to "16GB", + "storage" to "512GB SSD" + ) + ) + )) + } +} +``` + +## Returned Configuration + +When a MongoDB component is declared, the following configuration is available: + +| Property | Description | +|----------|-------------| +| `connectionString` | The MongoDB connection string (e.g., `mongodb://localhost:49153/testDb`) | +| `port` | The port 
MongoDB is listening on |
+
+### Accessing Configuration
+
+```kotlin
+val infra = stack {
+    mongo(databaseName = "testDb") {
+        collection("users", data = listOf(
+            mapOf("name" to "Alice", "age" to 30)
+        ))
+    }
+}.start()
+
+val mongoConfig = infra.mongo.single().componentInfo!!.componentConfig
+val connectionString = mongoConfig.connectionString
+val port = mongoConfig.port
+
+println("MongoDB available at: $connectionString")
+```
+
+## Working with MongoDB Client
+
+You can connect to the MongoDB instance using the MongoDB Java Driver:
+
+```kotlin
+import com.mongodb.client.MongoClients
+
+val infra = stack {
+    mongo(databaseName = "testDb") {
+        collection("people", data = listOf(
+            mapOf("name" to "Jimmy", "age" to 25),
+            mapOf("name" to "Jack", "age" to 43)
+        ))
+    }
+}.start()
+
+val connectionString = infra.mongo.single().componentInfo!!.componentConfig.connectionString
+val mongoClient = MongoClients.create(connectionString)
+val database = mongoClient.getDatabase("testDb")
+val collection = database.getCollection("people")
+
+// Query all documents
+val allDocuments = collection.find().toList()
+allDocuments.forEach { doc ->
+    println("Name: ${doc.getString("name")}, Age: ${doc.getInteger("age")}")
+}
+
+// Filter documents
+val youngPeople = collection.find(
+    com.mongodb.client.model.Filters.lt("age", 30)
+).toList()
+
+mongoClient.close()
 ```
-### Returned values
-When a MongoDB component is declared, the following data is returned:
+## Data Types
+
+MongoDB supports various data types through the Map structure:
+
+- **Strings**: `String`
+- **Numbers**: `Int`, `Long`, `Double`, `BigDecimal`
+- **Booleans**: `Boolean`
+- **Dates**: `java.util.Date`, `java.time.Instant`
+- **Arrays**: `List<Any?>`
+- **Nested Documents**: `Map<String, Any?>`
+- **Null values**: `null`
-* `connectionString`
-* `port`
+
+Example with various types:
+
+```kotlin
+mongo(databaseName = "testDb") {
+    collection("records", data = listOf(
+        mapOf(
+            "stringField" to "text",
+            "intField" to 42,
+
"doubleField" to 3.14, + "boolField" to true, + "dateField" to Date(), + "arrayField" to listOf(1, 2, 3), + "objectField" to mapOf("nested" to "value"), + "nullField" to null + ) + )) +} +``` diff --git a/docs/pages/providers/s3.mdx b/docs/pages/providers/s3.mdx index afd6156..67fb985 100644 --- a/docs/pages/providers/s3.mdx +++ b/docs/pages/providers/s3.mdx @@ -1,26 +1,265 @@ -## S3 +--- +title: S3 +--- -The `s3` block declares an s3 bucket, using the `localstack/localstack` image. +# S3 -### Declaring a bucket +The `s3` provider creates an S3-compatible object storage using LocalStack. It supports creating buckets and uploading files in various ways: inline content, file system paths, or streaming sequences. + +## Quick Start + +```kotlin +stack { + s3 { + bucket("test-bucket") { + file("hello.txt", "Hello, world") + } + } +} +``` + +## Configuration + +### Custom Image + +```kotlin +s3(imageName = "localstack/localstack:3.0") { + // configuration here +} +``` + +## Creating Buckets + +Use the `bucket` function to create an S3 bucket and optionally add files to it. + +### Empty Bucket + +```kotlin +s3 { + bucket("my-bucket") { + // Empty bucket + } +} +``` + +### Bucket with Files + +```kotlin +s3 { + bucket("test-bucket") { + file("hello.txt", "Hello, world") + file("data.json", """{"key": "value"}""") + } +} +``` + +## Adding Files + +Nebula provides three ways to add files to S3 buckets: + +### 1. Inline Content + +Create files with inline string content: + +```kotlin +s3 { + bucket("test-bucket") { + // Simple text file + file("hello.txt", "Hello, world") + + // JSON content + file("config.json", """ + { + "setting1": "value1", + "setting2": "value2" + } + """) + + // CSV content + file("data.csv", """ + name,age,city + Alice,30,New York + Bob,25,San Francisco + """) + } +} +``` + +### 2. 
File System Path + +Upload files from the local file system: + +```kotlin +s3 { + bucket("test-bucket") { + // Upload a file from the file system + // The file will be stored with its original filename + file("/path/to/local/file.csv") + } +} +``` + +### 3. Sequence-Based Streaming + +For large datasets, use a Kotlin Sequence to generate content on-the-fly. This is efficient for creating large files without loading everything into memory: + +```kotlin +s3 { + bucket("test-bucket") { + // Generate a large CSV file using a sequence + val sequence = sequence { + var rowCount = 0 + val maxRows = 100_000 + + while (rowCount < maxRows) { + val row = (0..20).map { + Random.nextInt(100_000, 999_999) + }.joinToString(",", postfix = "\n") + + rowCount++ + yield(row) + } + } + + file("large-dataset.csv", sequence) + } +} +``` + +## Complete Example ```kotlin stack { s3 { bucket("test-bucket") { - // Creates a file named hello.txt with the content of Hello World - // in a bucket named test-bucket + // Inline text content file("hello.txt", "Hello, world") + + // JSON configuration + file("config.json", """ + { + "database": "postgres", + "port": 5432 + } + """) + } + + bucket("data-bucket") { + // Large file from sequence + val largeDataset = sequence { + repeat(1_000_000) { + yield("row-$it,value-$it\n") + } + } + file("large-data.csv", largeDataset) } } } ``` -### Returned values -When an `s3` component is declared, the following data is returned: +## Returned Configuration + +When an `s3` component is declared, the following configuration is available: + +| Property | Description | +|----------|-------------| +| `accessKey` | The AWS access key for authentication | +| `secretKey` | The AWS secret key for authentication | +| `endpointOverride` | The LocalStack S3 endpoint URL | +| `s3Client` | An AWS S3 SDK client configured for LocalStack | + +### Accessing Configuration + +```kotlin +val infra = stack { + s3 { + bucket("test-bucket") { + file("hello.txt", "Hello, world") + 
} + } +}.start() + +val s3Config = infra.s3.single().componentInfo!!.componentConfig +val accessKey = s3Config.accessKey +val secretKey = s3Config.secretKey +val endpointOverride = s3Config.endpointOverride + +// Use the pre-configured S3 client +val s3Client = infra.s3.single().s3Client +``` + +## Reading Files + +You can read files back from S3 using the configured client: + +```kotlin +val infra = stack { + s3 { + bucket("test-bucket") { + file("hello.txt", "Hello, world") + } + } +}.start() + +// Read file content +val content = infra.s3.single().getObjectContent("test-bucket", "hello.txt") +println(content) // Outputs: Hello, world +``` + +## Working with AWS SDK + +The S3 provider includes a pre-configured AWS S3 SDK client: + +```kotlin +val infra = stack { + s3 { + bucket("test-bucket") { + file("data.txt", "sample data") + } + } +}.start() + +val s3Client = infra.s3.single().s3Client + +// List objects +val listResponse = s3Client.listObjectsV2 { + bucket = "test-bucket" +} + +listResponse.contents?.forEach { obj -> + println("Object: ${obj.key}, Size: ${obj.size}") +} + +// Get object metadata +val headResponse = s3Client.headObject { + bucket = "test-bucket" + key = "data.txt" +} +println("Content length: ${headResponse.contentLength}") +``` + +## Advanced: Sequence Upload Configuration + +When using sequences for large files, you can configure the upload behavior: + +```kotlin +// Note: Configuration is done internally via SequenceResource +// Default buffer size is 5MB (minimum for multipart uploads) +val sequence = sequence { + // Generate large dataset + repeat(100_000) { + yield("data-row-$it\n") + } +} + +s3 { + bucket("big-data") { + file("dataset.csv", sequence) + } +} +``` -| Key | Description | -|--------------------|------------------------------------------------| -| `accessKey` | The access key of the running AWS stack | -| `secretKey` | The secret key of the running AWS stack | -| `endpointOverride` | The endpoint override of the running 
AWS stack | \ No newline at end of file +The sequence uploader automatically handles: +- Multipart uploads for large files +- Memory-efficient streaming +- Proper cleanup and error handling \ No newline at end of file diff --git a/docs/pages/providers/sql.mdx b/docs/pages/providers/sql.mdx index ea48e64..62a4def 100644 --- a/docs/pages/providers/sql.mdx +++ b/docs/pages/providers/sql.mdx @@ -1,29 +1,96 @@ -## SQL -There are various SQL providers available, which all share a common DSL. +--- +title: SQL Databases +--- - * `postgres` : Declares an image using the `postgres:13` image by default - * `mysql`: Declares an image using the `mysql:9` image by default +# SQL Databases + +Nebula provides SQL database providers for PostgreSQL and MySQL using TestContainers. Both providers share a common DSL for defining tables and seeding data. + +## Available Providers + +- **PostgreSQL** - Default image: `postgres:13` +- **MySQL** - Default image: `mysql:9` + +## Quick Start + +### PostgreSQL ```kotlin -// use the default image. 
-postgres { +stack { + postgres { + table("users", """ + CREATE TABLE users ( + id SERIAL PRIMARY KEY, + username VARCHAR(100) NOT NULL, + email VARCHAR(255) NOT NULL + ) + """, data = listOf( + mapOf("username" to "alice", "email" to "[email protected]"), + mapOf("username" to "bob", "email" to "[email protected]") + )) + } +} +``` + +### MySQL + +```kotlin +stack { + mysql { + table("products", """ + CREATE TABLE products ( + id INT PRIMARY KEY AUTO_INCREMENT, + name VARCHAR(100) NOT NULL, + price DECIMAL(10, 2) + ) + """, data = listOf( + mapOf("name" to "Widget", "price" to BigDecimal("9.99")) + )) + } +} +``` + +## Configuration + +### Custom Image + +```kotlin +// PostgreSQL with specific version +postgres(imageName = "postgres:15") { // definition goes here } -// custom image -postgres(imageName = "postgres:12") { +// MySQL with specific version +mysql(imageName = "mysql:8.0") { // definition goes here } ``` -## Defining tables -You can run DDL to create tables, and populate them with data by calling the `table()` -function: +### Custom Database Name + +```kotlin +postgres(databaseName = "myapp") { + // tables here +} +``` + +## Defining Tables + +The `table` function creates a table and optionally seeds it with data. 
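+Conceptually, each map in `data` becomes one row: the keys name the columns and the values are bound as SQL parameters. A minimal sketch of that mapping, using a hypothetical `buildInsert` helper (an assumption for illustration, not Nebula's internals):

```kotlin
// Hypothetical helper (not Nebula's actual API) showing how a seeded row map
// could be translated into a parameterized INSERT statement. Column order
// follows the map's insertion order, which Kotlin's mapOf preserves.
fun buildInsert(table: String, row: Map<String, Any?>): Pair<String, List<Any?>> {
    val columns = row.keys.joinToString(", ")
    val placeholders = row.keys.joinToString(", ") { "?" }
    return "INSERT INTO $table ($columns) VALUES ($placeholders)" to row.values.toList()
}
```

Binding the values as parameters, rather than formatting them into the SQL string, is what allows typed values such as `UUID`, `Instant`, and `BigDecimal` to pass through the JDBC driver unchanged.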
+ +### Parameters + +| Parameter | Type | Description | +|-----------|------|-------------| +| `name` | String | The name of the table | +| `ddl` | String | SQL DDL statement to create the table | +| `data` | `List<Map<String, Any>>` | Optional list of rows to insert | + +### Basic Example ```kotlin postgres { - table( - "users", """ + table("users", """ CREATE TABLE users ( id UUID PRIMARY KEY, username VARCHAR(100) NOT NULL, @@ -32,12 +99,139 @@ postgres { login_count INTEGER, balance DECIMAL(10, 2) ) - """, - data = listOf( + """) +} +``` + +### With Data + +```kotlin +val now = Instant.now() + +postgres { + table("users", """ + CREATE TABLE users ( + id UUID PRIMARY KEY, + username VARCHAR(100) NOT NULL, + created_at TIMESTAMP WITH TIME ZONE, + is_active BOOLEAN, + login_count INTEGER, + balance DECIMAL(10, 2) + ) + """, data = listOf( + mapOf( + "id" to UUID.randomUUID(), + "username" to "john_doe", + "created_at" to now, + "is_active" to true, + "login_count" to 5, + "balance" to BigDecimal("100.50") + ), + mapOf( + "id" to UUID.randomUUID(), + "username" to "jane_smith", + "created_at" to now.minusSeconds(3600), + "is_active" to false, + "login_count" to 2, + "balance" to BigDecimal("75.25") + ) + )) +} +``` + +### Multiple Tables + +```kotlin +postgres { + table("users", """ + CREATE TABLE users ( + id SERIAL PRIMARY KEY, + username VARCHAR(100) NOT NULL + ) + """, data = listOf( + mapOf("username" to "alice"), + mapOf("username" to "bob") + )) + + table("products", """ + CREATE TABLE products ( + id SERIAL PRIMARY KEY, + name VARCHAR(100) NOT NULL, + description TEXT, + price DECIMAL(10, 2) NOT NULL + ) + """, data = listOf( + mapOf( + "name" to "Widget", + "description" to "A fantastic widget", + "price" to BigDecimal("9.99") + ), + mapOf( + "name" to "Gadget", + "description" to "An amazing gadget", + "price" to BigDecimal("24.99") + ) + )) +} +``` + +## Supported Data Types + +Nebula automatically handles type mapping for common SQL types: + +- **Strings**: `VARCHAR`, 
`TEXT`, `CHAR` +- **Numbers**: `INTEGER`, `BIGINT`, `DECIMAL`, `NUMERIC`, `DOUBLE`, `FLOAT` +- **Booleans**: `BOOLEAN` +- **Dates/Times**: `TIMESTAMP`, `DATE`, `TIME` +- **UUIDs**: `UUID` (PostgreSQL) +- **Binary**: `BYTEA`, `BLOB` + +### Type Examples + +```kotlin +postgres { + table("complex_types", """ + CREATE TABLE complex_types ( + uuid_field UUID, + text_field TEXT, + int_field INTEGER, + decimal_field DECIMAL(10, 2), + bool_field BOOLEAN, + timestamp_field TIMESTAMP WITH TIME ZONE, + json_field JSONB + ) + """, data = listOf( + mapOf( + "uuid_field" to UUID.randomUUID(), + "text_field" to "Sample text", + "int_field" to 42, + "decimal_field" to BigDecimal("123.45"), + "bool_field" to true, + "timestamp_field" to Instant.now() + ) + )) +} +``` + +## Complete Example + +```kotlin +stack { + postgres(databaseName = "testDb") { + table("users", """ + CREATE TABLE users ( + id UUID PRIMARY KEY, + username VARCHAR(100) NOT NULL, + created_at TIMESTAMP WITH TIME ZONE, + is_active BOOLEAN, + login_count INTEGER, + balance DECIMAL(10, 2) + ) + """, data = listOf( mapOf( "id" to UUID.randomUUID(), "username" to "john_doe", - "created_at" to now, + "created_at" to Instant.now(), "is_active" to true, "login_count" to 5, "balance" to BigDecimal("100.50") @@ -45,25 +239,22 @@ postgres { mapOf( "id" to UUID.randomUUID(), "username" to "jane_smith", - "created_at" to now.minusSeconds(3600), + "created_at" to Instant.now().minusSeconds(3600), "is_active" to false, "login_count" to 2, "balance" to BigDecimal("75.25") ) - ) - ) - - table( - "products", """ - CREATE TABLE products ( - id SERIAL PRIMARY KEY, - name VARCHAR(100) NOT NULL, - description TEXT, - price DECIMAL(10, 2) NOT NULL, - created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP - ) - """, data = listOf( + )) + table("products", """ + CREATE TABLE products ( + id SERIAL PRIMARY KEY, + name VARCHAR(100) NOT NULL, + description TEXT, + price DECIMAL(10, 2) NOT NULL, + created_at TIMESTAMP WITH TIME ZONE 
DEFAULT CURRENT_TIMESTAMP + ) + """, data = listOf( mapOf( "name" to "Widget", "description" to "A fantastic widget", @@ -74,16 +265,71 @@ postgres { "description" to "An amazing gadget", "price" to BigDecimal("24.99") ) - ) - ) + )) + } } ``` -### Returned values -When any database component is declared, the following data is returned: +## Returned Configuration + +When a database component is declared, the following configuration is available: + +| Property | Description | +|----------|-------------| +| `databaseName` | The name of the database | +| `jdbcUrl` | The JDBC connection URL | +| `username` | The database username | +| `password` | The database password | +| `port` | The port the database is listening on | +| `dsl` | A jOOQ DSLContext for querying the database | + +### Accessing Configuration + +```kotlin +val infra = stack { + postgres { + table("users", """...""", data = listOf(...)) + } +}.start() + +val db = infra.database.single() +val jdbcUrl = db.componentInfo!!.componentConfig.jdbcUrl +val dsl = db.dsl + +// Query using jOOQ +val users = dsl.selectFrom("users").fetch() +users.forEach { user -> + println(user["username"]) +} +``` + +## Working with jOOQ + +Nebula provides a jOOQ `DSLContext` for each database, allowing type-safe SQL queries: + +```kotlin +val infra = stack { + postgres { + table("users", """...""", data = listOf(...)) + } +}.start() + +val dsl = infra.database.single().dsl + +// Select all users +val users = dsl.selectFrom("users").fetch() + +// Filter results +val activeUsers = dsl.selectFrom("users") + .where("is_active = true") + .fetch() + +// Access fields +users.forEach { record -> + val username = record["username"] as String + val loginCount = record["login_count"] as Int + println("$username has logged in $loginCount times") +} +``` - * `databaseName` - * `jdbcUrl` - * `username` - * `password` - * `port` \ No newline at end of file +See the [jOOQ documentation](https://www.jooq.org/doc/latest/manual/) for more advanced 
query capabilities. \ No newline at end of file diff --git a/docs/pages/providers/taxi.mdx b/docs/pages/providers/taxi.mdx new file mode 100644 index 0000000..9b3623d --- /dev/null +++ b/docs/pages/providers/taxi.mdx @@ -0,0 +1,344 @@ +--- +title: Taxi Publisher +--- + +# Taxi Publisher + +The `taxiPublisher` provider publishes Taxi schema packages to a remote Taxi schema server. This is useful for sharing type definitions and schemas with [Orbital](https://orbitalhq.com) or other Taxi-based systems. + +## Quick Start + +```kotlin +stack { + taxiPublisher("http://localhost:9022", "com.example/myschema/1.0.0") { + taxi("types.taxi") { + """ + type PersonName inherits String + type EmailAddress inherits String + + model Person { + name: PersonName + email: EmailAddress + age: Int + } + """ + } + } +} +``` + +## Configuration + +The `taxiPublisher` function requires two parameters: + +| Parameter | Type | Description | +|-----------|------|-------------| +| `url` | String | The base URL of the Taxi schema server (e.g., `http://localhost:9022`) | +| `packageUri` | String | Package identifier in format `org/name/version` (e.g., `com.example/myschema/1.0.0`) | + +## Adding Taxi Sources + +Use the `taxi` function to define Taxi schema files that will be published. 
+ +### Single Schema File + +```kotlin +taxiPublisher("http://localhost:9022", "com.example/myschema/1.0.0") { + taxi("types.taxi") { + """ + type UserId inherits String + type Username inherits String + + model User { + id: UserId + username: Username + createdAt: Instant + } + """ + } +} +``` + +### Multiple Schema Files + +```kotlin +taxiPublisher("http://localhost:9022", "com.example/myschema/1.0.0") { + taxi("types.taxi") { + """ + type PersonName inherits String + type EmailAddress inherits String + """ + } + + taxi("models.taxi") { + """ + model Person { + name: PersonName + email: EmailAddress + age: Int + } + + model Company { + name: String + employees: Person[] + } + """ + } + + taxi("services.taxi") { + """ + service PersonService { + operation findPerson(PersonName):Person + operation listPeople():Person[] + } + """ + } +} +``` + +## Adding Additional Sources + +Beyond Taxi schemas, you can include additional source files (like configuration files) using the `additionalSource` function. 
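+Conceptually, every `taxi(...)` and `additionalSource(...)` call contributes one source entry to the package that gets published. A rough sketch of that assembly, where `PackageSource` and `assemble` are illustrative assumptions rather than Nebula's actual API:

```kotlin
// Illustrative only: models how named sources might be collected together with
// the org/name/version identifier before being POSTed to the schema server.
data class PackageSource(val sourceKind: String, val path: String, val content: String)

fun assemble(packageUri: String, sources: List<PackageSource>): Map<String, Any> {
    // The package URI uses the org/name/version format, e.g. "com.example/myschema/1.0.0"
    val (org, name, version) = packageUri.split("/")
    return mapOf(
        "organisation" to org,
        "name" to name,
        "version" to version,
        "sources" to sources
    )
}
```

Taxi schemas and additional sources differ only in their `sourceKind`, which is why both kinds of file can live in the same published package.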
+ +### Parameters + +| Parameter | Type | Description | +|-----------|------|-------------| +| `sourceKind` | String | The type of additional source (e.g., `@orbital/config`) | +| `path` | String | The file path within the package | +| `content` | Lambda returning String | Function that returns the file content | + +### Example with Additional Sources + +```kotlin +taxiPublisher("http://localhost:9022", "com.example/myschema/1.0.0") { + // Taxi schema + taxi("schema.taxi") { + """ + type PersonName inherits String + model Person { + name: PersonName + } + """ + } + + // Orbital configuration + additionalSource("@orbital/config", "connections.conf") { + """ + connections { + myDatabase { + connectionString = "jdbc:postgresql://localhost:5432/mydb" + username = "user" + password = "pass" + } + } + """ + } + + // Additional metadata + additionalSource("@orbital/config", "metadata.json") { + """ + { + "version": "1.0.0", + "author": "Development Team", + "description": "Person management schema" + } + """ + } +} +``` + +### Pair Syntax + +You can also use pair syntax for additional sources: + +```kotlin +taxiPublisher("http://localhost:9022", "com.example/myschema/1.0.0") { + taxi("types.taxi") { + """ + type PersonName inherits String + """ + } + + additionalSource("@orbital/config", "config.conf" to """ + some.setting = "value" + another.setting = 42 + """) +} +``` + +## Complete Example + +```kotlin +stack { + taxiPublisher("http://localhost:9022", "com.example/ecommerce/1.0.0") { + // Core types + taxi("types.taxi") { + """ + namespace com.example.ecommerce + + type ProductId inherits String + type ProductName inherits String + type Price inherits Decimal + type CustomerId inherits String + type OrderId inherits String + """ + } + + // Domain models + taxi("models.taxi") { + """ + namespace com.example.ecommerce + + model Product { + id: ProductId + name: ProductName + price: Price + inStock: Boolean + } + + model Customer { + id: CustomerId + name: String + email: 
String + } + + model Order { + id: OrderId + customerId: CustomerId + products: Product[] + total: Price + status: String + } + """ + } + + // Service definitions + taxi("services.taxi") { + """ + namespace com.example.ecommerce + + service ProductService { + operation getProduct(ProductId): Product + operation listProducts(): Product[] + } + + service OrderService { + operation createOrder(CustomerId, Product[]): Order + operation getOrder(OrderId): Order + } + """ + } + + // Configuration + additionalSource("@orbital/config", "database.conf") { + """ + database { + host = "localhost" + port = 5432 + name = "ecommerce" + } + """ + } + } +} +``` + +## Publishing Behavior + +When the stack starts: + +1. The Taxi schemas and additional sources are packaged together +2. The package is published to the specified Taxi server endpoint +3. The server endpoint typically expects a POST to `/api/schemas/taxi` +4. The published package includes metadata and all source files + +## Integration with Orbital + +The Taxi Publisher is designed to work with [Orbital](https://orbitalhq.com), a data integration platform. 
When publishing to an Orbital server: + +```kotlin +stack { + // Start an Orbital-compatible schema server + http(port = 9022) { + post("/api/schemas/taxi") { call -> + val content = call.receiveText() + println("Received Taxi schema package: $content") + call.respond(HttpStatusCode.OK) + } + } + + // Publish schemas to the server + taxiPublisher("http://localhost:9022", "com.mycompany/schemas/1.0.0") { + taxi("schema.taxi") { + """ + type UserId inherits String + model User { + id: UserId + name: String + } + """ + } + } +} +``` + +## Returned Configuration + +When a `taxiPublisher` component is declared, the following configuration is available: + +| Property | Description | +|----------|-------------| +| `url` | The target schema server URL | + +### Accessing Configuration + +```kotlin +val infra = stack { + taxiPublisher("http://localhost:9022", "com.example/test/1.0.0") { + taxi("types.taxi") { + "type TestType inherits String" + } + } +}.start() + +val config = infra.taxiPublisher.single().componentInfo!!.componentConfig +println("Published to: ${config.url}") +``` + +## Best Practices + +1. **Versioning**: Use semantic versioning in your package URI (e.g., `1.0.0`, `1.1.0`) +2. **Organization**: Separate schemas into logical files (types, models, services) +3. **Namespaces**: Use namespaces in Taxi schemas to avoid naming conflicts +4. 
**Documentation**: Include comments in your Taxi schemas to document types and models + +```kotlin +taxiPublisher("http://localhost:9022", "com.example/myschema/1.0.0") { + taxi("documented-types.taxi") { + """ + namespace com.example + + // Represents a unique identifier for a user + type UserId inherits String + + // A user in the system + model User { + // The unique identifier + id: UserId + + // The user's display name + name: String + + // When the user account was created + createdAt: Instant + } + """ + } +} +``` + +## Resources + +- [Taxi Language Documentation](https://taxilang.org/) +- [Orbital Documentation](https://docs.orbitalhq.com/)