AgentGate is a source-available security and governance layer for AI agents. It adds PII protection, injection defense, approvals, rate limits, audit evidence, and formal verification around model and tool calls without forcing teams to rebuild their application stack.
- Redact PII before prompts leave your infrastructure.
- Block common attack classes such as SQL injection, shell injection, XSS, and prompt attacks.
- Enforce human approvals, rate limits, and budget controls around sensitive operations.
- Capture tamper-evident audit logs and signed decision certificates.
- Run the same project as an SDK, API service, dashboard, CLI, and MCP security server.
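To give a rough idea of what PII redaction before a prompt leaves your infrastructure looks like, here is a minimal stdlib-only sketch. The patterns and the `redact` function are illustrative assumptions, not AgentGate's actual implementation; real PII detection handles validation, context, and international formats.

```python
import re

# Illustrative patterns only -- not AgentGate's detection logic.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder so the raw
    value never appears in an outbound prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Reach me at jane@example.com, SSN 123-45-6789."))
# → Reach me at [EMAIL], SSN [SSN].
```

A middleware such as `PIIVault` would apply this kind of masking transparently to every outbound model call, keyed by the `mask_*` flags shown in the quick start.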
Install the core SDK:

```shell
pip install ea-agentgate
```

Install the full server profile when you want the local API, dashboard, auth, and governance surfaces:

```shell
pip install "ea-agentgate[server]"
```

A minimal agent with PII redaction and injection validation:

```python
from ea_agentgate import Agent
from ea_agentgate.middleware import PIIVault, Validator

agent = Agent(
    middleware=[
        PIIVault(mask_ssn=True, mask_email=True, mask_credit_card=True),
        Validator(block_sql_injection=True, block_shell_injection=True),
    ]
)
```

- Repository: github.com/eacognitive/agentgate
- Full README: GitHub README
- Issues: github.com/eacognitive/agentgate/issues
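The `Validator` middleware configured above blocks injection attacks in tool-call arguments. The sketch below shows the general shape of signature-based screening; the patterns and `validate_tool_input` are hypothetical, and production validators rely on parsers and allow-lists rather than regexes alone.

```python
import re

# Crude signatures for illustration; not AgentGate's rule set.
SQL_SIGNATURES = [
    re.compile(r"(?i)\bunion\s+select\b"),
    re.compile(r"(?i)\bor\s+1\s*=\s*1\b"),
    re.compile(r"(?i);\s*drop\s+table\b"),
]
SHELL_SIGNATURES = [
    re.compile(r"[;&|]\s*(?:rm|curl|wget|sh|bash)\b"),
    re.compile(r"\$\([^)]*\)"),  # $(...) command substitution
    re.compile(r"`[^`]*`"),      # backtick substitution
]

def validate_tool_input(text: str) -> list[str]:
    """Return the attack classes detected in a tool-call argument;
    an empty list means the input passed screening."""
    findings = []
    if any(p.search(text) for p in SQL_SIGNATURES):
        findings.append("sql_injection")
    if any(p.search(text) for p in SHELL_SIGNATURES):
        findings.append("shell_injection")
    return findings

print(validate_tool_input("name'; DROP TABLE users; --"))
# → ['sql_injection']
```

A gateway would run a check like this on every tool call and refuse, log, or escalate to human approval when findings are non-empty.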
The repository demo stack includes a dashboard playground. To get real model responses in that playground, set `OPENAI_API_KEY` in the root `.env` file before running `./run demo --fresh`.
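The demo reads the key from a `.env` file at the repository root, along these lines (the value shown is a placeholder for your own key):

```
OPENAI_API_KEY=your-key-here
```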