📋 Prerequisites
📝 Feature Summary
When a kagent Agent CR has `spec.declarative.a2aConfig` populated, set `appProtocol: kgateway.dev/a2a` on the agent's Service port so agentgateway automatically switches to A2A protocol handling — the same way it already does for MCP backends marked with `appProtocol: kgateway.dev/mcp`.
❓ Problem Statement / Motivation
agentgateway's documented Kubernetes integration for A2A (docs) relies on the Service `appProtocol` marker:

> "Notice that the Service uses the appProtocol: kgateway.dev/a2a setting. This way, agentgateway configures the agentgateway proxy to use the A2A protocol."
Without this marker, the proxy treats traffic as opaque HTTP and skips A2A-specific behaviors like agent-card `url` rewriting. That breaks the round-trip: a card fetched through the gateway points back at the in-cluster Service hostname, which external A2A clients can't reach.
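For illustration, the A2A agent-card schema carries a `url` field; fetched through the gateway without rewriting, a card would point at an address like this (agent name and hostname are made-up examples):

```json
{
  "name": "my-agent",
  "url": "http://my-agent.kagent.svc.cluster.local:8080"
}
```

An external client that received this card would try to dial the `svc.cluster.local` hostname directly and fail.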
The kagent operator (`go/core/internal/controller/translator/agent/conversion.go`) already reads `appProtocol` to find the MCP port on referenced Services, but does not write any `appProtocol` value on the agent's own generated Service. So the obvious fix — patching the Service in place — gets reverted on the next reconcile loop within ~10s (verified empirically on v0.8.0).
Today the only workaround is a sibling "wrapper" Service per agent in a separate manifest, with a duplicated pod selector. Six agents in our deployment ⇒ six wrapper Services. Two boilerplate Services per A2A agent is a lot of Kubernetes for what should be a one-liner in the operator's translator.
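For reference, the wrapper workaround looks roughly like this per agent (names, selector labels, and ports are illustrative — they must mirror whatever the operator generated):

```yaml
# Hypothetical wrapper Service: exists only to carry the appProtocol marker,
# duplicating the operator-managed Service's selector and port.
apiVersion: v1
kind: Service
metadata:
  name: my-agent-a2a
spec:
  selector:
    app: my-agent            # must mirror the operator-generated selector
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      appProtocol: kgateway.dev/a2a
```

Every selector or port change in the operator's output has to be mirrored here by hand, which is exactly the kind of drift this proposal removes.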
💡 Proposed Solution
In `internal/controller/translator/agent/conversion.go` (or wherever the agent Service is generated), set the `appProtocol` field on the agent port when the source Agent CR has `spec.declarative.a2aConfig` non-nil:
```go
agentPort := corev1.ServicePort{
	Name:        "http",
	Port:        8080,
	TargetPort:  intstr.FromInt(8080),
	Protocol:    corev1.ProtocolTCP,
	AppProtocol: ptr.To("kgateway.dev/a2a"), // ← new, when a2aConfig is set
}
```
A plain HTTP Agent (no A2A) would keep the field unset, exactly as today.
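A minimal sketch of the conditional, using simplified stand-in types (the real spec types live in kagent's API package; field names here are assumptions based on `spec.declarative.a2aConfig`):

```go
package main

import "fmt"

// Hypothetical, simplified mirror of the Agent spec shape.
type A2AConfig struct{}

type DeclarativeSpec struct {
	A2AConfig *A2AConfig
}

type AgentSpec struct {
	Declarative *DeclarativeSpec
}

// appProtocolFor returns the appProtocol value the translator would set on
// the generated Service port: kgateway.dev/a2a when a2aConfig is present,
// nil (field unset) otherwise.
func appProtocolFor(spec AgentSpec) *string {
	if spec.Declarative != nil && spec.Declarative.A2AConfig != nil {
		p := "kgateway.dev/a2a"
		return &p
	}
	return nil // plain HTTP agent: leave the field unset, as today
}

func main() {
	a2a := appProtocolFor(AgentSpec{Declarative: &DeclarativeSpec{A2AConfig: &A2AConfig{}}})
	plain := appProtocolFor(AgentSpec{Declarative: &DeclarativeSpec{}})
	fmt.Println(*a2a, plain == nil) // kgateway.dev/a2a true
}
```

Because the value is derived from the CR on every reconcile, it survives the revert problem that kills the in-place patch approach.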
🔄 Alternatives Considered
- In-place `oc patch` of the operator-managed Service — reverted by the controller within seconds.
- Wrapper Services in user manifests — works but doubles Service count for every A2A agent.
- A separate `service` field on the Agent CR allowing arbitrary spec overrides — more flexible but heavier; the `appProtocol` case is mechanical and doesn't need configurability.
- Mutating webhook — fights the operator on every reconcile; brittle.
Additional Context
- Reproduced on kagent v0.8.0, agentgateway v1.1.0, OpenShift 4.x.
- `oc explain agentgatewaybackends.agentgateway.dev.spec` confirms agentgateway has no dedicated `a2a` backend type yet — `appProtocol` on the Service is the only documented integration mechanism for A2A in K8s.
- Same pattern is already in production for MCP (`appProtocol: kgateway.dev/mcp` on `argocd-mcp-server`, `db2-luw-mcp-server`, etc.).
- Code reference: `go/core/internal/controller/translator/agent/conversion.go` already has the surrounding shape — it reads `svc.Spec.Ports[].AppProtocol` for MCP detection but never writes it back on agent Services.
Happy to send a PR if there's interest.