diff --git a/docs/cli/grpc-service/generate.mdx b/docs/cli/grpc-service/generate.mdx index 9a782a0c..07f45b62 100644 --- a/docs/cli/grpc-service/generate.mdx +++ b/docs/cli/grpc-service/generate.mdx @@ -10,6 +10,10 @@ description: "Generate a protobuf definition for a gRPC service from a GraphQL s The `generate` command generates a protobuf definition and mapping file for a gRPC service from a GraphQL schema, which can be used to implement a gRPC service and for composition. +It supports two modes: +1. **Schema Mode**: Generates a protobuf definition that mirrors the GraphQL schema. Used for implementing Subgraphs in gRPC (Connect Backend). +2. **Operations Mode**: Generates a protobuf definition based on a set of GraphQL operations. Used for generating client SDKs (Connect Client). + ## Usage ```bash @@ -30,6 +34,11 @@ wgc grpc-service generate [options] [service-name] | `-o, --output ` | The output directory for the protobuf schema | `.` | | `-p, --package-name ` | The name of the proto package | `service.v1` | | `-g, --go-package ` | Adds an `option go_package` to the proto file | None | +| `-w, --with-operations ` | Path to directory containing GraphQL operation files. Enables **Operations Mode**. | None | +| `-l, --proto-lock ` | Path to existing proto lock file. | `/service.proto.lock.json` | +| `--custom-scalar-mapping ` | Custom scalar type mappings as JSON string. Example: `{"DateTime":"google.protobuf.Timestamp"}` | None | +| `--custom-scalar-mapping-file ` | Path to JSON file containing custom scalar type mappings. | None | +| `--max-depth ` | Maximum recursion depth for processing nested selections. | `50` | ## Description @@ -37,12 +46,23 @@ This command generates a protobuf definition for a gRPC service from a GraphQL s ## Examples -### Generate a protobuf definition for a gRPC service from a GraphQL schema +### Generate a protobuf definition for a gRPC service from a GraphQL schema (Schema Mode) ```bash wgc grpc-service generate -i ./schema.graphql -o ./service MyService ``` +### Generate a protobuf definition from operations (Operations Mode) + +```bash +wgc grpc-service generate \ + -i ./schema.graphql \ + -o ./gen \ + --with-operations ./operations \ + --package-name my.service.v1 \ + MyService +``` + ### Define a custom package name ```bash @@ -60,10 +80,10 @@ wgc grpc-service generate -i ./schema.graphql -o ./service MyService --go-packag The command generates multiple files in the output directory: - `service.proto`: The protobuf definition for the gRPC service -- `service.mapping.json`: The mapping file for the gRPC service +- `service.mapping.json`: The mapping file for the gRPC service (Schema Mode only) - `service.proto.lock.json`: The lock file for the protobuf definition -The generated protobuf definition can be used to implement a gRPC service in any language that supports protobuf. +The generated protobuf definition can be used to implement a gRPC service in any language that supports protobuf, or to generate client SDKs. The mapping file and the protobuf definition are needed for the composition step. diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx new file mode 100644 index 00000000..9fe0d1c8 --- /dev/null +++ b/docs/connect/client.mdx @@ -0,0 +1,344 @@ +--- +title: "Connect Client" +description: "Generate type-safe clients and OpenAPI specs from GraphQL operations" +icon: "code" +--- + +# Connect Client + + + **Alpha Feature**: The Connect Client capability is currently in alpha. APIs and functionality may change as we gather feedback.
+ + +Connect Client enables you to generate type-safe gRPC/Connect clients and OpenAPI specifications directly from your GraphQL operations. This allows you to consume your Federated (or monolithic) Graph using standard gRPC tooling in any language, or expose REST APIs via OpenAPI without writing manual adapters. + +## Overview + +While **Connect gRPC Services** (gRPC Subgraphs) focuses on implementing subgraphs using gRPC, **Connect Client** focuses on the consumer side. It allows you to define GraphQL operations (Queries, Mutations and soon Subscriptions) and compile them into a Protobuf service definition. + +The Cosmo Router acts as a bridge. It serves your generated operations via the Connect protocol, executes them against your Federated Graph, and maps the GraphQL response back to typed Protobuf messages. + +## Workflow + +The typical workflow involves defining your operations, configuring the Router to serve them, and then distributing the Protobuf or OpenAPI definition to consumers who generate their client SDKs. 
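Concretely, each operation in a collection compiles into a unary RPC on the generated Protobuf service: operation variables become request fields, and the operation's selection set becomes the response shape. As a rough sketch only (the message names and field numbers below are illustrative assumptions; the real layout is produced by the generator and pinned by the lock file), a `GetEmployee` query could map to something like:

```protobuf
syntax = "proto3";

package myorg.employee.v1;

// One RPC per GraphQL operation (names here are illustrative).
service MyService {
  rpc GetEmployee(GetEmployeeRequest) returns (GetEmployeeResponse) {}
}

// Operation variables become request fields.
message GetEmployeeRequest {
  string id = 1;
}

// The operation's selection set becomes the response shape.
message GetEmployeeResponse {
  Employee employee = 1;
}

message Employee {
  string id = 1;
  EmployeeDetails details = 2;
}

message EmployeeDetails {
  string forename = 1;
  string surname = 2;
}
```

The sequence below shows how this definition flows from provider to consumer.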
+ +```mermaid +sequenceDiagram + autonumber + participant Provider as API Provider + participant CLI as Cosmo CLI (wgc) + participant Consumer as API Consumer + participant Client as Client SDK + participant Router as Cosmo Router + participant Graph as Federated Graph + + Note over Provider, Router: Setup & Configuration + Provider->>Provider: Define GraphQL Operations (.graphql) + Provider->>CLI: Generate Protobuf (wgc grpc-service generate) + CLI->>Provider: service.proto + lock file + Provider->>Router: Configure & Start Router with operations + + Note over Provider, Consumer: Distribution + Provider->>Consumer: Distribute service.proto / OpenAPI spec + + Note over Consumer, Client: Client Development + Consumer->>Consumer: Generate Client SDK (buf/protoc) + Consumer->>Client: Integrate SDK into App + + Note over Client, Graph: Runtime + Client->>Router: Send RPC Request (Connect/gRPC) + Router->>Graph: Execute GraphQL Operation + Graph-->>Router: GraphQL Response + Router-->>Client: Protobuf Response + + Note over Provider, Router: Observe + Router->>Provider: OTEL Metrics / Traces (GraphQL & RPC) +``` + +## Usage Example + +### 1. Define GraphQL Operations + +Create a directory for your operations, e.g., `services/`: + +```graphql services/GetEmployee.graphql +query GetEmployee($id: ID!) { + employee(id: $id) { + id + details { + forename + surname + } + } +} +``` + +### 2. Generate Protobuf + +Run the `wgc grpc-service generate` command with the `--with-operations` flag. You must also provide the schema SDL to validate the operations. + + + Each collection of operations represents a distinct Protobuf service. You can organize your operations into different directories (packages) to create multiple services, giving you the flexibility to expose specific subsets of your graph to different consumers or applications. + + + + It is recommended to output the generated proto file to the same directory as your operations to keep them together. 
+ + +```bash +wgc grpc-service generate \ + --input schema.graphql \ + --output ./services \ + --with-operations ./services \ + --package-name "myorg.employee.v1" \ + MyService +``` + +This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./services` directory. + +### 3. Configure and Start Router + +Enable the ConnectRPC server in your `config.yaml` and point the `services` provider to the directory containing your generated `service.proto`. + +```yaml config.yaml +# ConnectRPC configuration +connect_rpc: + enabled: true + server: + listen_addr: "0.0.0.0:8081" + services_provider_id: "fs-services" + +# Storage providers for services directory +storage_providers: + file_system: + - id: "fs-services" + # Path to the directory containing your generated service.proto and operations + path: "./services" +``` + +Start the router. It is now ready to accept requests for the operations defined in `service.proto`. + +### 4. Generate Client SDK + +Use [buf](https://buf.build/) or `protoc` to generate the client code for your application. + +Example `buf.gen.yaml` for Go: + +```yaml buf.gen.yaml +version: v2 +managed: + enabled: true + override: + - file_option: go_package_prefix + value: github.com/wundergraph/cosmo/router-tests/testdata/connectrpc/client +plugins: + - remote: buf.build/protocolbuffers/go + out: client + opt: + - paths=source_relative + - remote: buf.build/connectrpc/go + out: client + opt: + - paths=source_relative +``` + +Run the generation: + +```bash +buf generate services/service.proto +``` + +### 5. Use the Client + +You can now use the generated client to call your GraphQL API via the Router. The Router acts as the server implementation for your generated service. 
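Because the Connect protocol also accepts plain JSON over HTTP POST, you can smoke-test the endpoint with `curl` before generating any client code. This is a sketch that assumes the package name, service name, and listen address from the steps above; the URL path follows the Connect convention of `/<proto package>.<Service>/<Method>`:

```bash
# Call the GetEmployee RPC on the router's ConnectRPC listener.
# Path format: /<proto package>.<service>/<rpc name>
curl http://localhost:8081/myorg.employee.v1.MyService/GetEmployee \
  -H "Content-Type: application/json" \
  -d '{"id": "1"}'
```

If the router is configured correctly, the response is the JSON encoding of the Protobuf response message for the operation.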
+ +```go +package main + +import ( + "context" + "net/http" + "log" + + "connectrpc.com/connect" + employeev1 "example/gen/go/myorg/employee/v1" + "example/gen/go/myorg/employee/v1/employeev1connect" +) + +func main() { + // Point to your Cosmo Router's ConnectRPC listener + client := employeev1connect.NewMyServiceClient( + http.DefaultClient, + "http://localhost:8081", + ) + + req := connect.NewRequest(&employeev1.GetEmployeeRequest{ + Id: "1", + }) + + resp, err := client.GetEmployee(context.Background(), req) + if err != nil { + log.Fatal(err) + } + + log.Printf("Employee: %s %s", + resp.Msg.Employee.Details.Forename, + resp.Msg.Employee.Details.Surname, + ) +} +``` + +## Directory Structure & Organization + +The Cosmo Router uses a convention-based directory structure to automatically discover and load Connect RPC services. This approach co-locates proto files with their GraphQL operations for easy management. + +### Configuration + +Configure the router to point to your root services directory using a storage provider: + +```yaml config.yaml +connect_rpc: + enabled: true + server: + listen_addr: "0.0.0.0:8081" + services_provider_id: "fs-services" + +storage_providers: + file_system: + - id: "fs-services" + path: "./services" # Root services directory +``` + +The router will recursively walk the `services` directory and automatically discover all proto files and their associated GraphQL operations. + +### Recommended Structure + +For organization purposes, we recommend keeping all services in a root `services` directory, with subdirectories for packages and individual services. 
+ +#### Single Service per Package + +When you have one service per package, you can organize files directly in the package directory: + +``` +services/ +└── employee.v1/ # Package directory + ├── employee.proto # Proto definition + ├── employee.proto.lock.json + ├── GetEmployee.graphql # Operation files + ├── UpdateEmployee.graphql + └── DeleteEmployee.graphql +``` + +Or with operations in a subdirectory: + +``` +services/ +└── employee.v1/ + ├── employee.proto + ├── employee.proto.lock.json + └── operations/ + ├── GetEmployee.graphql + ├── UpdateEmployee.graphql + └── DeleteEmployee.graphql +``` + +#### Multiple Services per Package + +When multiple services share the same proto package, organize them in service subdirectories: + +``` +services/ +└── company.v1/ # Package directory + ├── EmployeeService/ # First service + │ ├── employee.proto # package company.v1; service EmployeeService + │ ├── employee.proto.lock.json + │ └── operations/ + │ ├── GetEmployee.graphql + │ └── UpdateEmployee.graphql + └── DepartmentService/ # Second service, same package + ├── department.proto # package company.v1; service DepartmentService + ├── department.proto.lock.json + └── operations/ + ├── GetDepartment.graphql + └── ListDepartments.graphql +``` + +### Flexible Organization + +The router determines service identity by proto package declarations, not directory names. This gives you flexibility in organizing your files: + +``` +services/ +├── hr-services/ +│ ├── employee.proto # package company.v1; service EmployeeService +│ └── GetEmployee.graphql +└── admin-services/ + ├── department.proto # package company.v1; service DepartmentService + └── GetDepartment.graphql +``` + + + **Service Uniqueness**: The combination of proto package name and service name must be unique. For example, you can have multiple services with the same package name (e.g., `company.v1`) as long as they have different service names (`EmployeeService`, `DepartmentService`). 
The router uses the package + service combination to identify and route requests. + + +### Discovery Rules + +The router follows these rules when discovering services: + +1. **Recursive Discovery**: The router recursively walks the services directory to find all `.proto` files +2. **Proto Association**: Each `.proto` file discovered becomes a service endpoint +3. **Operation Association**: All `.graphql` files in the same directory (or subdirectories) are associated with the nearest parent `.proto` file +4. **Nested Proto Limitation**: If a `.proto` file is found in a directory, any `.proto` files in subdirectories are **not** discovered (the parent proto takes precedence) + +#### Example: Nested Proto Files + +``` +services/ +└── employee.v1/ + ├── employee.proto # ✅ Discovered as a service + ├── GetEmployee.graphql # ✅ Associated with employee.proto + └── nested/ + ├── other.proto # ❌ NOT discovered (parent proto found first) + └── UpdateEmployee.graphql # ✅ Still associated with employee.proto +``` + + + **Avoid Nested Proto Files**: Do not place `.proto` files in subdirectories of a directory that already contains a `.proto` file. The nested proto files will not be discovered by the router. + + +### Best Practices + +1. **Use Semantic Versioning**: Include version numbers in package names (e.g., `employee.v1`, `employee.v2`) to support API evolution +2. **Co-locate Operations**: Keep GraphQL operations close to their proto definitions for easier maintenance +3. **Consistent Naming**: Use clear, descriptive names for packages and services that reflect their purpose +4. **Lock File Management**: Always commit `.proto.lock.json` files to version control to maintain field number stability + +## Observability + +The Cosmo Router provides built-in [observability features](/router/metrics-and-monitoring) that work seamlessly with Connect Client. Because the Router translates RPC calls into GraphQL operations, you get detailed metrics and tracing for both layers. 
+ +- **GraphQL Metrics**: Track the performance, error rates, and usage of your underlying GraphQL operations (`GetEmployee`, etc.). +- **Request Tracing**: Trace the entire flow from the incoming RPC request, through the GraphQL engine, to your subgraphs and back. +- **Standard Protocols**: Export data using OpenTelemetry (OTLP) or Prometheus to your existing monitoring stack (Grafana, Datadog, Cosmo Cloud, etc.). + +Since the Router is aware of the operation mapping, it can attribute metrics correctly to the specific GraphQL operation being executed, giving you full visibility into your client's usage patterns. + +## Forward Compatibility & Lock Files + +When you generate your Protobuf definition, the CLI creates a `service.proto.lock.json` file. **You should commit this file to your version control system.** + +This lock file maintains a history of your operations and their field assignments. When you modify your operations (e.g., add new fields, reorder fields, or add new operations), the CLI uses the lock file to ensure: + +1. **Stable Field Numbers**: Existing fields retain their unique Protobuf field numbers, even if you reorder them in the GraphQL query. +2. **Safe Evolution**: You can safely evolve your client requirements without breaking existing clients. + +This mechanism allows you to iterate on your GraphQL operations—adding data requirements or new features—while maintaining binary compatibility for deployed clients. + +## Roadmap + +The following features are planned for future releases of Connect Client: + +1. **OpenAPI Generation**: Enhanced support for generating OpenAPI specifications including descriptions, summaries, deprecated fields, and tags. +2. **Subscription Support**: Ability to consume GraphQL subscriptions as gRPC streams, enabling real-time data updates over the Connect protocol. +3. 
**Multiple Root Fields**: Support for executable operations containing multiple root selection set fields, allowing more complex queries in a single operation. +4. **Field Aliases**: Support for GraphQL aliases to control the shape of the API surface, enabling customized field names in the generated Protobuf definitions. diff --git a/docs/connect/overview.mdx b/docs/connect/overview.mdx index a4e0a954..292186cc 100644 --- a/docs/connect/overview.mdx +++ b/docs/connect/overview.mdx @@ -12,27 +12,36 @@ One of the biggest downsides of Apollo Federation is that backend developers mus How does this work? You define an Apollo-compatible Subgraph Schema, compile it into a protobuf definition, and implement it in your favorite gRPC stack, such as Go, Java, C#, or many others. No specific framework or GraphQL knowledge is required. It is really just gRPC! -## Key Benefits +## Key Capabilities -* **All Cosmo platform benefits** — including breaking change detection, centralized telemetry, and governance out of the box -* **Federation without GraphQL servers** — backend teams implement gRPC contracts instead of GraphQL resolvers -* **Language flexibility** — leverage gRPC code generation across nearly all ecosystems, including those with poor GraphQL server libraries -* **Reduced migration effort** — wrap existing APIs (like REST or SOAP) without writing full subgraphs, lowering the cost of moving from monoliths to federation -* **Developer experience** — straightforward request/response semantics, with the router handling GraphQL query planning and batching +### Connect gRPC Services (gRPC Subgraphs) + +Implement Federated Subgraphs using gRPC instead of GraphQL resolvers. +- **No GraphQL servers required**: Backend teams implement standard gRPC services. +- **Language flexibility**: Use any language with gRPC support (Go, Java, Rust, C#, etc.). +- **Reduced complexity**: The Router handles query planning; your service handles simple RPCs. 
+ +### Connect Client (Typed Clients) + +Generate type-safe clients from your GraphQL operations. +- **Type Safety**: Generate SDKs for iOS, Android, Web, and Backend services. +- **OpenAPI**: Generate OpenAPI specs from your GraphQL operations. +- **Performance**: Use the efficient Connect/gRPC protocol to talk to your GraphQL API. ## Deployment Models ```mermaid graph LR - client["Clients (Web / Mobile / Server)"] --> routerCore["Router Core"] + client["GraphQL Clients
<br/>(Web / Mobile)"] --> routerCore + connectClient["Connect Clients
<br/>(Generated SDKs)"] --> routerCore subgraph routerBox["Cosmo Router"] routerCore - plugin["Router Plugin
<br/>(Cosmo Connect)"] + plugin["Router Plugin
<br/>(Connect gRPC Service)"] end routerCore --> subA["GraphQL Subgraph"] - routerCore --> grpcSvc["gRPC Service
<br/>(Cosmo Connect)"] + routerCore --> grpcSvc["gRPC Service
<br/>(Connect gRPC Service)"] grpcSvc --> restA["REST / HTTP APIs"] grpcSvc --> soapA["Databases"] @@ -43,21 +52,26 @@ graph LR %% Styling classDef grpcFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff; classDef pluginFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff; + classDef clientFill fill:#3b82f6,stroke:#3b82f6,stroke-width:1.5px,color:#ffffff; + class grpcSvc grpcFill; class plugin pluginFill; + class connectClient clientFill; ``` -Cosmo Connect supports two ways to integrate gRPC into your federated graph: +Cosmo Connect supports three main integration patterns: -- **[Router Plugins](/connect/plugins)** — run as local processes managed by the router. Ideal for simple deployments where you want the lowest latency and do not need separate CI/CD or scaling. -- **[gRPC Services](/connect/grpc-services)** — independent deployments in any language. Suitable when you need full lifecycle control, team ownership boundaries, and independent scaling. +1. **[Connect Client](/connect/client)** — Generated clients that speak the Connect protocol to the Router. +2. **[Router Plugins](/connect/plugins)** — gRPC services running as local processes managed by the router. +3. **[gRPC Services](/connect/grpc-services)** — Independent gRPC services implementing subgraphs. -Both approaches remove the need to build GraphQL servers while maintaining the benefits of federation. +Both plugin and service approaches remove the need to build GraphQL servers while maintaining the benefits of federation. Connect Client removes the need to manually write GraphQL queries in your application code. ## Implementation Docs The following documentation explains how to build and deploy services and plugins: +- **[Connect Client](/connect/client)** — Generate type-safe clients and OpenAPI specs from GraphQL operations.
- **[Router Plugins](/router/gRPC/plugins)** — Documentation for developing, configuring, and deploying plugins that run inside the router - **[gRPC Services](/router/gRPC/grpc-services)** — Documentation for the complete lifecycle of building, deploying, and managing independent gRPC services diff --git a/docs/docs.json b/docs/docs.json index 2abfffc7..14ffb877 100644 --- a/docs/docs.json +++ b/docs/docs.json @@ -68,6 +68,7 @@ "group": "Cosmo Connect", "pages": [ "connect/overview", + "connect/client", "connect/plugins", "connect/grpc-services" ]