
Python: [Python][Agents] AgentMesh: Trust and Governance Layer #13517

Open

imran-siddique wants to merge 1 commit into microsoft:main from imran-siddique:contrib/agent-os-governance

Conversation

@imran-siddique
Member

Summary

Adds an `agentmesh` module to `semantic_kernel.agents` providing cryptographic identity verification and governance controls for Semantic Kernel agents.

Features

Trust Layer

  • CMVKIdentity: Ed25519-based cryptographic identity
  • TrustedAgentCard: Agent discovery and verification
  • TrustHandshake: Peer verification protocol (see the sketch after this list)
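
Under the hood the handshake is a standard Ed25519 challenge/response. The sketch below shows that flow using the `cryptography` package directly; it is illustrative only and not the `TrustHandshake` API itself (see the PR diff for the actual surface):

```python
# Illustrative only: uses the `cryptography` package to show the underlying
# Ed25519 challenge/response, not the TrustHandshake class added by this PR.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each agent holds an Ed25519 key pair; the public key is published via its agent card.
peer_private_key = Ed25519PrivateKey.generate()
peer_public_key = peer_private_key.public_key()

# The verifier sends a random challenge, the peer signs it, and the verifier
# checks the signature against the public key it already trusts.
challenge = os.urandom(32)
signature = peer_private_key.sign(challenge)

try:
    peer_public_key.verify(signature, challenge)
    print("peer identity verified")
except InvalidSignature:
    print("verification failed")
```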

Governance Layer

  • GovernancePolicy: Comprehensive policy configuration
  • GovernedAgent: Agent wrapper with policy enforcement
  • GovernanceKernel: Kernel wrapper with governance controls (see the wrapper sketch below)
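
Both wrappers follow the same pattern: check policy, delegate to the wrapped object, record an audit entry. A minimal illustration of that pattern with hypothetical names (`GovernedWrapper` and `EchoAgent` are stand-ins, not the classes in this PR):

```python
# Illustrative only: GovernedWrapper and EchoAgent are hypothetical stand-ins,
# not the GovernedAgent / GovernanceKernel classes added by this PR.
import asyncio


class PolicyViolation(Exception):
    """Raised when an invocation is blocked by policy."""


class EchoAgent:
    """Stand-in for a real Semantic Kernel agent."""

    async def invoke(self, function_name, **kwargs):
        return f"{function_name} handled with {kwargs}"


class GovernedWrapper:
    """Check policy, delegate to the wrapped agent, then record an audit entry."""

    def __init__(self, agent, allowed_functions, audit_log):
        self.agent = agent
        self.allowed_functions = set(allowed_functions)
        self.audit_log = audit_log

    async def invoke(self, function_name, **kwargs):
        if function_name not in self.allowed_functions:
            raise PolicyViolation(f"'{function_name}' is not on the allow list")
        result = await self.agent.invoke(function_name, **kwargs)
        self.audit_log.append({"function": function_name, "kwargs": kwargs})
        return result


async def main():
    log = []
    governed = GovernedWrapper(EchoAgent(), allowed_functions=["chat"], audit_log=log)
    print(await governed.invoke("chat", prompt="hello"))  # allowed and audited
    # governed.invoke("execute_code", ...) would raise PolicyViolation


asyncio.run(main())
```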

Governance Capabilities

| Feature | Description |
| --- | --- |
| Rate Limiting | Per-minute and per-hour limits (sketched below) |
| Function Control | Allow/deny lists for functions |
| Resource Limits | Concurrent tasks, memory limits |
| Audit Logging | Full invocation audit trail |
| Trust Requirements | Identity verification, trust scores |
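
For example, the per-minute limit can be enforced with a simple sliding window. This is one common way to do it, not necessarily how the module implements it:

```python
# Illustrative only: a sliding-window limiter; the module's implementation may differ.
import time
from collections import deque


class SlidingWindowLimiter:
    """Allow at most max_per_minute calls within any rolling 60-second window."""

    def __init__(self, max_per_minute: int):
        self.max_per_minute = max_per_minute
        self.timestamps = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Evict timestamps that have fallen out of the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_minute:
            return False
        self.timestamps.append(now)
        return True


limiter = SlidingWindowLimiter(max_per_minute=30)
print(limiter.allow())  # True until 30 calls land inside the same minute
```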

Example

```python
from semantic_kernel.agents.agentmesh import (
    CMVKIdentity,
    GovernedAgent,
    GovernancePolicy,
)

identity = CMVKIdentity.generate('assistant', capabilities=['chat'])

policy = GovernancePolicy(
    max_requests_per_minute=30,
    allowed_functions=['chat'],
    audit_all_invocations=True,
)

governed = GovernedAgent(agent=base_agent, identity=identity, policy=policy)
```

Related

Adds an `agentmesh` module to `semantic_kernel.agents` providing:
- CMVKIdentity: Cryptographic identity with Ed25519 keys
- TrustedAgentCard: Agent discovery and verification
- TrustHandshake: Peer verification protocol
- GovernancePolicy: Rate limiting, capability control, auditing
- GovernedAgent: Agent wrapper with governance enforcement
- GovernanceKernel: Kernel wrapper with policy enforcement

Features:
- Rate limiting (per-minute and per-hour)
- Function allow/deny lists
- Resource limits (concurrent tasks, memory)
- Full audit logging
- Trust score thresholds
- Policy violation tracking (see the audit sketch below)
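
A sketch of what an audit entry with violation tracking might look like; the field names here are hypothetical, not the module's actual schema:

```python
# Illustrative only: field names are hypothetical, not the module's audit schema.
import json
import time
from dataclasses import asdict, dataclass


@dataclass
class AuditEntry:
    agent_id: str
    function: str
    allowed: bool      # False when a policy blocked the call (violation tracking)
    timestamp: float


audit_log = []


def record(agent_id: str, function: str, allowed: bool) -> None:
    audit_log.append(AuditEntry(agent_id, function, allowed, time.time()))


record("assistant", "chat", allowed=True)
record("assistant", "execute_code", allowed=False)  # denied call recorded as a violation
print(json.dumps([asdict(entry) for entry in audit_log], indent=2))
```
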
@imran-siddique imran-siddique requested a review from a team as a code owner February 6, 2026 21:48
@moonbox3 added the python and documentation labels Feb 6, 2026
@github-actions github-actions bot changed the title [Python][Agents] AgentMesh: Trust and Governance Layer Python: [Python][Agents] AgentMesh: Trust and Governance Layer Feb 6, 2026
@moonbox3
Collaborator

moonbox3 commented Feb 6, 2026

@imran-siddique
Member Author

Ready for Final Review 🙏

This PR has been open for a while. The AgentMesh trust layer integration is complete and tested.

Could a maintainer please provide a final review? Happy to address any remaining concerns.

Thank you!

@moonbox3
Collaborator

moonbox3 commented Feb 7, 2026

What's the requirement/need driving this?

@imran-siddique
Member Author

Great question! The need comes from several production multi-agent scenarios:

Key Requirements

  1. Identity Verification - When agents communicate (A2A, multi-agent orchestration), there's no built-in way to verify "who" you're talking to. Without cryptographic identity, any process can claim to be any agent.

  2. Trust-Gated Operations - Sensitive operations (code execution, data access, external API calls) should only be allowed from verified, trusted agents. This module provides configurable trust thresholds per operation (a minimal illustration follows this list).

  3. Audit Compliance - Enterprise deployments need full audit trails of all agent invocations for compliance (GDPR, HIPAA, SOX). The governance layer logs every action with identity tracking.

  4. Rate Limiting & Resource Control - Prevent runaway agents from exhausting resources. The policy layer enforces per-minute/per-hour limits and concurrent task bounds.
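
As an illustration of point 2, a trust-gated check can be as simple as combining identity verification with a score threshold. The names below (`verified`, `trust_score`, `min_trust_score`) are hypothetical, not necessarily the fields exposed by this PR:

```python
# Illustrative only: hypothetical names, not this module's API.
from dataclasses import dataclass


@dataclass
class PeerInfo:
    verified: bool      # did the Ed25519 handshake succeed?
    trust_score: float  # e.g. 0.0-1.0, derived from prior interactions


def may_execute_code(peer: PeerInfo, min_trust_score: float = 0.8) -> bool:
    """Gate a sensitive operation on verified identity plus a trust threshold."""
    return peer.verified and peer.trust_score >= min_trust_score


print(may_execute_code(PeerInfo(verified=True, trust_score=0.9)))  # True
print(may_execute_code(PeerInfo(verified=True, trust_score=0.5)))  # False
```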

Real Example

In a multi-agent system where:

  • Agent A requests Agent B to execute code
  • Agent B should verify Agent A is trusted before execution
  • All interactions must be logged for audit

This module makes that possible with minimal code changes:

```python
governed = GovernedAgent(agent=base_agent, identity=identity, policy=policy)

# Now all invocations are identity-verified, policy-checked, and audit-logged.
```

Similar integrations have been merged/submitted to AutoGen, CrewAI, A2A, and others. Happy to discuss specific use cases!
