FARP is a protocol specification library, NOT a complete gateway or service framework. This document clarifies what FARP provides versus what implementers (services and gateways) must build.
| Concern | FARP Library | Service Implementation | Gateway Implementation |
|---|---|---|---|
| Data Structures | ✅ Defines types | Uses types | Uses types |
| Schema Generation | ✅ Providers | Calls providers | - |
| Schema Merging | ✅ Merge logic | - | Calls merge logic |
| HTTP Endpoints | ✅ FARPHandler | Mounts handler on router | - |
| Service Discovery | ✅ 6 backends + push | Uses ServiceNode | Uses GatewayNode |
| Registry Backend | ✅ Memory + KV via backends | Auto via ServiceNode | Auto via GatewayNode |
| Route Configuration | ✅ Schema-to-route conversion | - | Applies routes from callback |
| Health Monitoring | ✅ Auto health loop | Auto via ServiceNode | Auto via GatewayNode |
| Webhook Transport | ❌ Types only | ✅ Must implement | ✅ Must implement |
- **Type System** (`types.go`)
  - `SchemaManifest`, `SchemaDescriptor`, `InstanceMetadata`, etc.
  - Routing, authentication, and webhook configuration types
  - JSON serialization/deserialization
- **Schema Providers** (`providers/*`)
  - OpenAPI, AsyncAPI, gRPC, GraphQL, oRPC, Thrift, Avro generators
  - Extract schemas from application code/IDL files
  - Return a standardized schema format
- **Schema Merging** (`merger/*`)
  - Combine multiple service schemas into unified docs
  - Conflict resolution strategies
  - Support for all protocol types
- **Validation Logic** (`manifest.go`)
  - Ensure manifests are spec-compliant
  - Checksum calculation and verification
  - Version compatibility checks
- **Storage Abstractions** (`registry.go`, `storage.go`)
  - `SchemaRegistry` and `StorageBackend` interfaces
  - In-memory implementation for testing (`registry/memory`)
  - KV-based backends via discovery backends (Consul, etcd, Redis)
- **Service Discovery** (`discovery/*`)
  - `ServiceDiscovery` interface with 6 backend implementations
  - `ServiceNode` — auto-lifecycle for services (register, health, schemas)
  - `GatewayNode` — auto-lifecycle for gateways (discover, fetch, routes)
  - `FARPHandler` — ready-to-mount HTTP handler for FARP endpoints
  - Push-based discovery — no external registry needed
- **Gateway Client** (`gateway/*`)
  - Schema-to-route conversion (OpenAPI, AsyncAPI, GraphQL)
  - Route hash comparison to prevent unnecessary remounts
  - Atomic route swap via the `RouteUpdateHandler` interface
With the discovery system, most integration is automatic via `ServiceNode`:

```go
node, _ := discovery.NewServiceNode(discovery.ServiceNodeConfig{
    ServiceName: "user-service",
    Address:     "10.0.0.5:8080",
    Discovery:   consulBackend,
})
node.Start(ctx)
http.Handle("/_farp/", node.HTTPHandler()) // mount on your router
```

For manual integration (without ServiceNode):
- **HTTP Server Endpoints** — or use `FARPHandler`
  - `GET /_farp/manifest` — return the `SchemaManifest` as JSON
  - `GET /_farp/health` — health check endpoint
  - `GET /_farp/schemas/{type}` — return schema by type
- **Discovery Backend Integration** — or use `ServiceNode`
  - Register the service with Consul/etcd/K8s/mDNS
  - Store the FARP manifest in backend metadata
  - Handle TTL/heartbeats
- **Schema Generation Workflow** — a service framework (e.g., Forge) must:
  1. Initialize FARP providers
  2. Generate schemas from the router
  3. Create a `SchemaManifest`
  4. Expose it via HTTP or store it in the registry
  5. Register with the discovery backend
- **Optional: Webhook Receivers**
  - Accept HTTP POST from the gateway with events
  - Validate signatures
  - Handle event delivery
Example: The Forge framework would integrate FARP by calling providers during startup and exposing the manifest via HTTP handlers.
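As an illustration of the manual path, here is a minimal sketch of hand-rolled FARP endpoints. The `manifest` and `schemas` values are assumed to be built at startup; `FARPHandler` does all of this for you:

```go
package farpexample

import (
    "encoding/json"
    "net/http"
)

// mountFARPEndpoints wires up the three manual endpoints described above.
// manifest is the service's SchemaManifest (typed as any to keep the sketch
// self-contained); schemas maps schema type ("openapi", ...) to raw bytes.
func mountFARPEndpoints(mux *http.ServeMux, manifest any, schemas map[string][]byte) {
    mux.HandleFunc("/_farp/manifest", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(manifest)
    })
    mux.HandleFunc("/_farp/health", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte(`{"status":"ok"}`))
    })
    mux.HandleFunc("/_farp/schemas/", func(w http.ResponseWriter, r *http.Request) {
        schemaType := r.URL.Path[len("/_farp/schemas/"):]
        data, ok := schemas[schemaType]
        if !ok {
            http.NotFound(w, r)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        w.Write(data)
    })
}
```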
- **Service Discovery Client**
  - Watch Consul/etcd/K8s/mDNS for service registrations
  - Extract the FARP manifest from service metadata
  - Handle service additions/removals/updates
- **HTTP Client**
  - Fetch schemas from `LocationTypeHTTP` URLs
  - Handle timeouts, retries, TLS verification
  - Parse and validate fetched schemas
- **Schema-to-Route Conversion**
  - Parse OpenAPI paths → HTTP routes
  - Parse AsyncAPI channels → WebSocket routes
  - Parse gRPC services → gRPC routes
  - Apply routing strategies (mount at root, prefix with service name, etc.)
- **Route Configuration**
  - Apply routes to gateway-specific config (Kong, Traefik, Envoy, etc.)
  - Handle route updates and removals
  - Traffic splitting for multiple versions
- **Health Monitoring**
  - Poll service health endpoints
  - Update routing based on health status
  - Circuit breaker logic
- **Optional: Webhook Dispatching**
  - Send events to service webhook endpoints
  - Retry with exponential backoff
  - Track delivery status
Example: octopus-gateway (Rust) would watch mDNS for services, fetch FARP manifests, and configure its internal routing table.
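In Go, the same responsibilities can be sketched as a discovery-event loop. The event and route-table types below are illustrative, not part of the library:

```go
package gatewayexample

import (
    "log"
    "sync"
)

// Hypothetical shapes for illustration only (not the library's API).
type EventKind int

const (
    ServiceAdded EventKind = iota
    ServiceUpdated
    ServiceRemoved
)

type Event struct {
    Kind    EventKind
    Service string
    Address string
}

type Route struct{ Method, Path, Upstream string }

type RouteTable struct {
    mu        sync.Mutex
    byService map[string][]Route
}

func (t *RouteTable) Swap(service string, routes []Route) {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.byService[service] = routes
}

func (t *RouteTable) Remove(service string) {
    t.mu.Lock()
    defer t.mu.Unlock()
    delete(t.byService, service)
}

// watchAndRoute consumes discovery events and keeps the route table current.
// fetch stands in for "GET /_farp/manifest, fetch schemas, convert to routes".
func watchAndRoute(events <-chan Event, fetch func(addr string) ([]Route, error), table *RouteTable) {
    for ev := range events {
        switch ev.Kind {
        case ServiceAdded, ServiceUpdated:
            routes, err := fetch(ev.Address)
            if err != nil {
                log.Printf("farp: fetch for %s failed: %v", ev.Service, err)
                continue // keep existing routes (see the error-handling sketches later)
            }
            table.Swap(ev.Service, routes)
        case ServiceRemoved:
            table.Remove(ev.Service)
        }
    }
}
```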
The `gateway/client.go` package is a reference implementation/helper, NOT production-ready gateway code. It demonstrates:
- ✅ How to structure a gateway integration
- ✅ How to watch for manifest changes
- ✅ How to convert schemas to routes
- ✅ How to cache schemas
It does NOT provide:
- ❌ Complete HTTP client (the HTTP fetch path is still a `TODO`)
- ❌ Production-ready error handling
- ❌ Gateway-specific route application
- ❌ Health monitoring
- ❌ Load balancing logic
Real gateways should use it as a reference, not a dependency.
FARP is designed with clear boundaries:
- Protocol Core: Types, interfaces, validation (backend-agnostic)
- Schema Providers: Protocol-specific generators (OpenAPI, gRPC, etc.)
- Storage Interfaces: Abstract registry operations (implementations separate)
- Gateway Helpers: Reference integration examples (not production)
Every major component is pluggable via interfaces:
┌─────────────────────────────────────────┐
│ Application Layer │
│ (Services, Gateways, Tooling) │
└─────────────────┬───────────────────────┘
│
┌─────────────────▼───────────────────────┐
│ FARP Protocol Core │
│ - SchemaManifest types │
│ - SchemaProvider interface │
│ - SchemaRegistry interface │
│ - Validation & serialization │
└─────────────────┬───────────────────────┘
│
┌────────┴────────┐
│ │
┌────────▼─────┐ ┌───────▼──────────┐
│ Providers │ │ Registry Impls │
│ │ │ │
│ - OpenAPI │ │ - Consul │
│ - AsyncAPI │ │ - etcd │
│ - gRPC │ │ - Kubernetes │
│ - GraphQL │ │ - Redis │
│ - Custom │ │ - Memory │
└──────────────┘ └──────────────────┘
The core protocol has zero dependencies on:
- Discovery backend implementations
- Forge framework internals
- Gateway implementations
This allows you to:
- Use FARP with non-Forge services
- Integrate with any gateway (Kong, Traefik, Envoy, etc.)
- Swap backends without protocol changes
Design decisions prioritize production requirements:
- Checksums: Detect schema corruption, enable efficient change detection
- Versioning: Support blue-green deployments, gradual rollouts
- Size Limits: Prevent backend overload, force HTTP strategy for large schemas
- Rate Limiting: Prevent DoS via excessive updates
- Audit Logging: Track all schema changes for compliance
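For example, the size-limit rule can be applied when building the manifest. A sketch using the location types shown later in this document; the threshold constant is illustrative, not a value mandated by the spec:

```go
// chooseLocation applies the size-limit rule: keep small schemas in the
// backend KV store, serve large ones over HTTP. The 256 KiB threshold is
// an illustrative assumption.
const maxRegistrySchemaBytes = 256 * 1024

func chooseLocation(schema []byte, registryPath, httpURL string) farp.SchemaLocation {
    if len(schema) <= maxRegistrySchemaBytes {
        return farp.SchemaLocation{Type: farp.LocationTypeRegistry, RegistryPath: registryPath}
    }
    return farp.SchemaLocation{Type: farp.LocationTypeHTTP, URL: httpURL}
}
```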
Responsibilities:
- Define canonical types (`SchemaManifest`, `SchemaDescriptor`)
- Define interfaces (`SchemaProvider`, `SchemaRegistry`)
- Provide validation and serialization utilities
- Calculate checksums
- Version compatibility checks
Dependencies: None (only Go stdlib)
Package Structure:
farp/
├── types.go # Core types (SchemaManifest, SchemaDescriptor, etc.)
├── manifest.go # Manifest operations (checksum, validation)
├── provider.go # SchemaProvider interface
├── registry.go # SchemaRegistry interface
├── storage.go # Storage abstraction
├── validation.go # Schema validation
├── checksum.go # Checksum calculation
├── version.go # Protocol version constants
└── errors.go # Error types
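The checksum idea in `checksum.go` can be sketched as a digest over the canonical JSON encoding of a value; the library's actual canonicalization rules and algorithm choice may differ:

```go
package main

import (
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
    "fmt"
)

// checksum hashes the canonical JSON encoding of a value. This is a sketch
// of the concept; the library's actual rules may differ.
func checksum(v any) (string, error) {
    data, err := json.Marshal(v)
    if err != nil {
        return "", err
    }
    sum := sha256.Sum256(data)
    return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
    c, _ := checksum(map[string]string{"service": "user-service", "version": "v1"})
    fmt.Println(c) // stable for identical input, so it doubles as change detection
}
```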
Responsibilities:
- Generate schemas from application code
- Implement the `SchemaProvider` interface
- Protocol-specific logic (OpenAPI path extraction, gRPC reflection, etc.)
Dependencies: Protocol Core + specific schema libraries
Package Structure:
farp/providers/
├── openapi/
│ ├── provider.go # OpenAPIProvider implementation
│ ├── generator.go # OpenAPI spec generation
│ └── validator.go # OpenAPI validation
├── asyncapi/
│ ├── provider.go # AsyncAPIProvider implementation
│ ├── generator.go # AsyncAPI spec generation
│ └── validator.go # AsyncAPI validation
├── grpc/
│ ├── provider.go # GRPCProvider implementation
│ ├── reflection.go # gRPC reflection client
│ └── protobuf.go # Protobuf parsing
└── graphql/
├── provider.go # GraphQLProvider implementation
└── introspection.go # GraphQL introspection query
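What implementing a provider involves can be sketched as follows. The interface shape here is an assumption for illustration; the real `SchemaProvider` definition lives in `provider.go`:

```go
package providerexample

import "context"

// SchemaProvider here is an assumed shape for illustration; the real
// interface is defined in provider.go and may differ.
type SchemaProvider interface {
    Type() string                                 // e.g. "openapi", "asyncapi"
    Generate(ctx context.Context) ([]byte, error) // serialized schema document
}

// staticProvider returns a pre-built spec: handy for tests or for services
// that ship a hand-written schema instead of generating one.
type staticProvider struct {
    schemaType string
    spec       []byte
}

func (p *staticProvider) Type() string { return p.schemaType }

func (p *staticProvider) Generate(ctx context.Context) ([]byte, error) {
    return p.spec, nil
}
```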
Responsibilities:
- Store/retrieve manifests and schemas
- Implement the `SchemaRegistry` interface
- Backend-specific optimizations
Dependencies: Protocol Core + backend client libraries
Package Structure:
farp/registry/
├── consul/
│ ├── registry.go # Consul implementation
│ ├── watcher.go # Consul watch support
│ └── config.go # Consul-specific config
├── etcd/
│ ├── registry.go # etcd implementation
│ ├── watcher.go # etcd watch support
│ └── config.go # etcd-specific config
├── kubernetes/
│ ├── registry.go # K8s ConfigMap implementation
│ └── watcher.go # K8s watch support
├── redis/
│ ├── registry.go # Redis implementation
│ └── pubsub.go # Redis pub/sub for changes
└── memory/
└── registry.go # In-memory (for testing)
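A round-trip test against the in-memory implementation might look like the sketch below. `PublishSchema` follows the usage shown later in this document, while the constructor and `GetSchema` are assumed counterparts:

```go
package registry_test

import (
    "bytes"
    "context"
    "testing"
    // import ".../farp/registry/memory" — path depends on the module
)

// TestPublishAndFetch sketches exercising a SchemaRegistry with the
// in-memory backend. Constructor and GetSchema are assumed names.
func TestPublishAndFetch(t *testing.T) {
    ctx := context.Background()
    reg := memory.NewSchemaRegistry()

    spec := []byte(`{"openapi":"3.0.0"}`)
    if err := reg.PublishSchema(ctx, "/schemas/user-service/v1/openapi", spec); err != nil {
        t.Fatal(err)
    }

    got, err := reg.GetSchema(ctx, "/schemas/user-service/v1/openapi")
    if err != nil {
        t.Fatal(err)
    }
    if !bytes.Equal(got, spec) {
        t.Fatalf("round-trip mismatch: %s", got)
    }
}
```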
Responsibilities:
- Watch for schema changes
- Fetch schemas
- Convert schemas to gateway-specific route configs
- Reference implementation for gateway developers
Dependencies: Protocol Core + Registry
Package Structure:
farp/gateway/
├── client.go # Gateway client (watches manifests)
├── watcher.go # Change notification handler
├── fetcher.go # Schema fetcher (HTTP + registry)
├── converter.go # Schema → Route conversion
├── cache.go # Local schema cache
└── examples/
├── kong.go # Kong gateway integration example
├── traefik.go # Traefik integration example
└── envoy.go # Envoy xDS integration example
┌─────────────────┐
│ Forge App │
│ Startup │
└────────┬────────┘
│
│ 1. Initialize router
▼
┌─────────────────┐
│ Schema │
│ Providers │ 2. Generate schemas (OpenAPI, AsyncAPI)
└────────┬────────┘
│
│ 3. Create manifest
▼
┌─────────────────┐
│ Manifest │
│ Builder │ 4. Calculate checksums
└────────┬────────┘
│
│ 5. Publish schemas (if registry strategy)
▼
┌─────────────────┐
│ Schema │
│ Registry │ 6. Store in backend (Consul, etcd, etc.)
└────────┬────────┘
│
│ 7. Register service instance + manifest
▼
┌─────────────────┐
│ Discovery │
│ Backend │
└─────────────────┘
┌─────────────────┐
│ API Gateway │
│ Startup │
└────────┬────────┘
│
│ 1. Connect to discovery backend
▼
┌─────────────────┐
│ Discovery │
│ Watch │ 2. Subscribe to service registrations
└────────┬────────┘
│
│ 3. New service registered → event
▼
┌─────────────────┐
│ Manifest │
│ Fetcher │ 4. Fetch SchemaManifest from instance metadata
└────────┬────────┘
│
│ 5. For each schema in manifest
▼
┌─────────────────┐
│ Schema │
│ Fetcher │ 6. Fetch schema (registry or HTTP)
└────────┬────────┘
│
│ 7. Validate checksum
▼
┌─────────────────┐
│ Schema │
│ Converter │ 8. Convert OpenAPI → HTTP routes
└────────┬────────┘ AsyncAPI → WebSocket routes
│
│ 9. Configure gateway routes
▼
┌─────────────────┐
│ Gateway │
│ Route Table │
└─────────────────┘
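`GatewayNode` automates this entire flow. A sketch of the wiring, mirroring the `ServiceNode` example earlier; the exact config fields and callback signature are assumptions:

```go
// GatewayNode automates discover → fetch → route. Field and callback names
// mirror the ServiceNode example and the RouteUpdateHandler mentioned
// earlier; exact signatures may differ.
node, _ := discovery.NewGatewayNode(discovery.GatewayNodeConfig{
    Discovery: consulBackend,
    OnRoutes: func(service string, routes []gateway.Route) error {
        return routeTable.Swap(service, routes) // atomic swap into your router
    },
})
node.Start(ctx)
```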
┌─────────────────┐
│ Service │
│ Hot Reload │
└────────┬────────┘
│
│ 1. Routes changed
▼
┌─────────────────┐
│ Schema │
│ Providers │ 2. Regenerate schemas
└────────┬────────┘
│
│ 3. Calculate new checksums
▼
┌─────────────────┐
│ Checksum │
│ Comparator │ 4. Compare with previous
└────────┬────────┘
│
│ 5. If changed
▼
┌─────────────────┐
│ Schema │
│ Registry │ 6. Update schemas in backend
└────────┬────────┘
│
│ 7. Update manifest (new checksum + timestamp)
▼
┌─────────────────┐
│ Discovery │
│ Backend │ 8. Trigger change notification
└────────┬────────┘
│
│ 9. Gateway watch detects change
▼
┌─────────────────┐
│ Gateway │
│ Reconfigure │ 10. Fetch updated schemas, reconfigure routes
└─────────────────┘
Flow:
- Service publishes schemas to backend KV store
- Service registers with manifest pointing to registry paths
- Gateway fetches schemas from backend
Advantages:
- Schemas persist even if service dies
- Fast gateway startup (no service polling)
- Centralized schema storage
- Backend handles high availability
Configuration:
```go
registry := consul.NewSchemaRegistry(consulClient)
registry.PublishSchema(ctx, "/schemas/user-service/v1/openapi", openAPISpec)

manifest := &farp.SchemaManifest{
    Schemas: []farp.SchemaDescriptor{{
        Type: farp.SchemaTypeOpenAPI,
        Location: farp.SchemaLocation{
            Type:         farp.LocationTypeRegistry,
            RegistryPath: "/schemas/user-service/v1/openapi",
        },
    }},
}
```

Flow:
- Service serves schemas via HTTP endpoints
- Service registers with manifest pointing to HTTP URLs
- Gateway fetches schemas directly from service
Advantages:
- No backend storage required
- Service controls schema freshness
- Simpler deployment
Configuration:
```go
manifest := &farp.SchemaManifest{
    Schemas: []farp.SchemaDescriptor{{
        Type: farp.SchemaTypeOpenAPI,
        Location: farp.SchemaLocation{
            Type: farp.LocationTypeHTTP,
            URL:  "http://user-service:8080/openapi.json",
        },
    }},
}
```

Flow:
- Service publishes schemas to backend
- Service also serves schemas via HTTP
- Gateway tries registry first, falls back to HTTP
Advantages:
- High availability (two sources)
- Works even if backend is down
- Self-healing
Configuration:
```go
manifest := &farp.SchemaManifest{
    Schemas: []farp.SchemaDescriptor{{
        Type: farp.SchemaTypeOpenAPI,
        Location: farp.SchemaLocation{
            Type:         farp.LocationTypeRegistry,
            RegistryPath: "/schemas/user-service/v1/openapi",
        },
    }},
    Endpoints: farp.SchemaEndpoints{
        OpenAPI: "/openapi.json", // fallback HTTP endpoint
    },
}
```

Step 1: Deploy v2 with the new schema
┌──────────────┐ ┌──────────────┐
│ Service v1 │ │ Service v2 │
│ (100%) │ │ (0%) │
└──────┬───────┘ └──────┬───────┘
│ │
│ v1 manifest │ v2 manifest
▼ ▼
┌───────────────────────────┐
│ Gateway │
│ - Registers v2 routes │
│ - Keeps v1 routes active │
└───────────────────────────┘
Step 2: Shift traffic gradually
┌──────────────┐ ┌──────────────┐
│ Service v1 │ │ Service v2 │
│ (90%) │ │ (10%) │
└──────┬───────┘ └──────┬───────┘
▼ ▼
┌───────────────────────────┐
│ Gateway │
│ - Traffic split 90/10 │
└───────────────────────────┘
Step 3: Complete migration
┌──────────────┐ ┌──────────────┐
│ Service v1 │ │ Service v2 │
│ (0%) │ │ (100%) │
└──────────────┘ └──────┬───────┘
▼
┌───────────────────────────┐
│ Gateway │
│ - Removes v1 routes │
│ - v2 routes active │
└───────────────────────────┘
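In the simplest case, the split in step 2 reduces to a weighted pick between the two upstream sets. A sketch; real gateways such as Kong or Envoy have native primitives for this:

```go
package main

import (
    "fmt"
    "math/rand"
)

// pickUpstream makes a weighted choice between v1 and v2 upstream pools,
// e.g. v2Percent = 10 for the 90/10 split in step 2. Illustrative only.
func pickUpstream(v1, v2 []string, v2Percent int) string {
    pool := v1
    if rand.Intn(100) < v2Percent {
        pool = v2
    }
    return pool[rand.Intn(len(pool))]
}

func main() {
    v1 := []string{"10.0.0.5:8080"}
    v2 := []string{"10.0.0.6:8080"}
    fmt.Println(pickUpstream(v1, v2, 10)) // ~10% of picks go to v2
}
```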
Similar to blue-green, but traffic shifts in smaller increments (1%, 5%, 10%, etc.).
Initial state: 3 instances v1
┌──────┐ ┌──────┐ ┌──────┐
│ v1-1 │ │ v1-2 │ │ v1-3 │
└──┬───┘ └──┬───┘ └──┬───┘
└────────┴────────┘
│
┌────▼────┐
│ Gateway │ (All use v1 schema)
└─────────┘
Step 1: Update instance 1
┌──────┐ ┌──────┐ ┌──────┐
│ v2-1 │ │ v1-2 │ │ v1-3 │
└──┬───┘ └──┬───┘ └──┬───┘
└────────┴────────┘
│
┌────▼────┐
│ Gateway │ (Supports v1 + v2 schemas)
└─────────┘
Step 2: Update instance 2
┌──────┐ ┌──────┐ ┌──────┐
│ v2-1 │ │ v2-2 │ │ v1-3 │
└──┬───┘ └──┬───┘ └──┬───┘
└────────┴────────┘
│
┌────▼────┐
│ Gateway │ (Majority on v2)
└─────────┘
Step 3: Update instance 3
┌──────┐ ┌──────┐ ┌──────┐
│ v2-1 │ │ v2-2 │ │ v2-3 │
└──┬───┘ └──┬───┘ └──┬───┘
└────────┴────────┘
│
┌────▼────┐
│ Gateway │ (All v2, remove v1 routes)
└─────────┘
The gateway maintains a local schema cache to avoid repeated fetches:

```go
type SchemaCache struct {
    mu      sync.RWMutex
    schemas map[string]CachedSchema // key: schema hash
    ttl     time.Duration
}

type CachedSchema struct {
    Schema     interface{}
    FetchedAt  time.Time
    AccessedAt time.Time
}

// Fetch with cache
func (g *Gateway) GetSchema(descriptor SchemaDescriptor) (interface{}, error) {
    // Check cache by hash
    if cached, ok := g.cache.Get(descriptor.Hash); ok {
        return cached.Schema, nil
    }

    // Cache miss: fetch from source
    schema, err := g.fetchSchema(descriptor)
    if err != nil {
        return nil, err
    }

    // Store in cache
    g.cache.Set(descriptor.Hash, schema)
    return schema, nil
}
```

Use backend-native watch mechanisms:
| Backend | Mechanism | Efficiency |
|---|---|---|
| Consul | Blocking queries (long polling) | High |
| etcd | gRPC streaming | Very high |
| Kubernetes | Watch API (HTTP streaming) | High |
| Redis | Pub/Sub | High |
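For Consul, the blocking-query row above corresponds to long-polling a key with a wait index. A sketch using the official client; paths and error handling are simplified:

```go
package watchexample

import (
    "time"

    consulapi "github.com/hashicorp/consul/api"
)

// watchKey long-polls a Consul key with blocking queries: each Get blocks
// until the key's index moves past WaitIndex or WaitTime elapses.
func watchKey(client *consulapi.Client, key string, onChange func([]byte)) {
    var lastIndex uint64
    for {
        pair, meta, err := client.KV().Get(key, &consulapi.QueryOptions{
            WaitIndex: lastIndex,
            WaitTime:  5 * time.Minute,
        })
        if err != nil {
            time.Sleep(time.Second) // simple backoff before retrying
            continue
        }
        if pair != nil && meta.LastIndex != lastIndex {
            onChange(pair.Value)
        }
        lastIndex = meta.LastIndex
    }
}
```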
Group schema publishes:
```go
func (r *Registry) PublishManifest(ctx context.Context, manifest *SchemaManifest) error {
    // Batch all schema publishes in a single transaction
    txn := r.backend.Transaction()
    for _, schema := range manifest.Schemas {
        if schema.Location.Type == LocationTypeRegistry {
            txn.Put(schema.Location.RegistryPath, schema.Data)
        }
    }
    // Single commit
    return txn.Commit(ctx)
}
```

Compress large schemas:
```go
func compressSchema(data []byte) ([]byte, error) {
    if len(data) < 1024 {
        return data, nil // don't compress small schemas
    }
    var buf bytes.Buffer
    gzipWriter := gzip.NewWriter(&buf)
    if _, err := gzipWriter.Write(data); err != nil {
        return nil, err
    }
    if err := gzipWriter.Close(); err != nil { // Close flushes remaining bytes
        return nil, err
    }
    return buf.Bytes(), nil
}
```

Degrade gracefully when a schema fetch fails:

```go
func (g *Gateway) handleSchemaFetchError(descriptor SchemaDescriptor, err error) {
    // Try fallback locations first
    if fallback := g.getFallbackLocation(descriptor); fallback != nil {
        if _, err := g.fetchFromLocation(fallback); err == nil {
            // A real implementation would apply the fetched schema here.
            return
        }
    }
// Use cached schema if available
if cached, ok := g.cache.Get(descriptor.Hash); ok {
logger.Warn("using stale cached schema", "age", time.Since(cached.FetchedAt))
return
}
// Mark service as degraded, continue with existing routes
g.markServiceDegraded(descriptor.ServiceName)
}
```

Retry manifest registration with exponential backoff, queueing it for eventual consistency if the backend stays down:

```go
func (r *Registry) RegisterManifest(ctx context.Context, manifest *SchemaManifest) error {
    // Retry with exponential backoff (the retry helper and the derivation
    // of key/data from the manifest are elided in this sketch)
    backoff := retry.NewExponential(time.Second)
    for i := 0; i < 5; i++ {
        err := r.backend.Put(ctx, key, data)
if err == nil {
return nil
}
if !isRetryable(err) {
return err
}
time.Sleep(backoff.Next())
}
// Store locally for eventual consistency
r.pendingQueue.Add(manifest)
go r.retryPendingManifests()
return nil
}
```

Validate fetched schemas and fall back to the last known good copy when validation fails:

```go
func (g *Gateway) processSchema(descriptor SchemaDescriptor) error {
schema, err := g.fetchSchema(descriptor)
if err != nil {
return err
}
// Validate schema format
if err := validateSchema(schema, descriptor.Type); err != nil {
// Log error but don't fail
logger.Error("invalid schema received",
"service", descriptor.ServiceName,
"type", descriptor.Type,
"error", err,
)
// Use last known good schema
return g.useLastKnownGoodSchema(descriptor)
}
return nil
}
```

Example metrics (Prometheus-style):

```
// Service-side metrics
farp_manifest_publish_total{service="user-service",status="success"}
farp_manifest_publish_duration_seconds{service="user-service"}
farp_schema_size_bytes{service="user-service",type="openapi"}
// Gateway-side metrics
farp_manifest_watch_events_total{service="user-service"}
farp_schema_fetch_total{service="user-service",type="openapi",status="success"}
farp_schema_fetch_duration_seconds{service="user-service",type="openapi"}
farp_schema_cache_hit_ratio{service="user-service"}
farp_route_updates_total{service="user-service",action="add"}
```

```go
// Structured logging with context
logger.Info("schema manifest registered",
"service", manifest.ServiceName,
"version", manifest.ServiceVersion,
"instance_id", manifest.InstanceID,
"schemas", len(manifest.Schemas),
"checksum", manifest.Checksum,
"size_bytes", len(manifestJSON),
)
logger.Info("gateway routes configured",
"service", manifest.ServiceName,
"routes_added", len(newRoutes),
"routes_updated", len(updatedRoutes),
"routes_removed", len(removedRoutes),
"duration_ms", time.Since(start).Milliseconds(),
)
```

Use OpenTelemetry for distributed tracing:

```go
ctx, span := tracer.Start(ctx, "farp.registry.publish_manifest")
defer span.End()
span.SetAttributes(
attribute.String("service", manifest.ServiceName),
attribute.String("version", manifest.ServiceVersion),
attribute.Int("schemas", len(manifest.Schemas)),
)
```

- Manifest validation
- Checksum calculation
- Schema serialization
- Location resolution
- Registry operations with real backends (test containers)
- Schema fetch with HTTP mock servers
- Watch notifications
- Full flow: service startup → gateway discovery → route configuration
- Schema updates and change propagation
- Failure scenarios (backend down, schema fetch timeout)
- Schema registration latency
- Watch notification latency
- Gateway startup time with 100+ services
- Cache hit ratio under load
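The gateway-startup measurement, for instance, fits naturally into a Go benchmark. The helpers below are illustrative, not part of the library:

```go
package gateway_test

import (
    "context"
    "testing"
)

// BenchmarkGatewayStartup times a full discovery + route-build pass against
// a registry pre-seeded with many services. The helpers
// (newMemoryRegistrySeededWith, newTestGateway) are hypothetical.
func BenchmarkGatewayStartup(b *testing.B) {
    reg := newMemoryRegistrySeededWith(128) // 128 fake service manifests
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        gw := newTestGateway(reg)
        if err := gw.Start(context.Background()); err != nil {
            b.Fatal(err)
        }
        gw.Stop()
    }
}
```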
Design philosophy: Simple things simple, complex things possible, production things robust.