Please read this first
- Have you read the docs? Yes
- Have you searched for related issues? Yes. Found no existing issue for per-tool authorization middleware in this SDK.
Describe the feature
The SDK has guardrails for input/output validation and human-in-the-loop for approval flows. What's missing is a per-tool authorization layer that evaluates whether a tool call should execute based on identity, scope, rate limits, and session context.
Guardrails check content. Authorization checks permission. Both are needed but they solve different problems.
Example: An agent with access to `send_email` and `query_database` passes every guardrail check but uses those tools to exfiltrate customer data to an external address. The content looks normal. The authorization was never checked.
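To make the distinction concrete, here is a minimal sketch (not SDK or AgentLock code; the guardrail pattern, domain, and scope names are invented for illustration) showing a content check passing while a permission check would deny the same call:

```python
import re

# Toy content guardrail: flags obvious PII patterns in outgoing text.
def content_guardrail(text: str) -> bool:
    return not re.search(r"\b\d{3}-\d{2}-\d{4}\b", text)  # SSN-like pattern

# Toy authorization check: sending to external recipients requires an
# explicit scope, regardless of what the message says.
def authorize_send(recipient: str, scopes: set[str]) -> bool:
    internal = recipient.endswith("@ourcorp.example")
    return internal or "email:send:external" in scopes

body = "Q3 customer export attached."  # content looks perfectly normal
recipient = "drop@attacker.example"    # but the destination is external

print(content_guardrail(body))                    # True  - guardrail passes
print(authorize_send(recipient, {"email:send"}))  # False - authorization denies
```

The guardrail has nothing to flag because the exfiltration is invisible at the content level; only a permission check on the call itself catches it.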
Proposed solution: A per-tool authorization middleware that intercepts tool calls before execution:
- Evaluate each tool call against identity, role, scope, and session state
- Support non-binary decisions: ALLOW, DENY, MODIFY, DEFER, STEP_UP
- Generate structured audit records for every decision
- Non-invasive: existing tools work without code changes
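A rough sketch of what the interception layer could look like. Every name here (AuthorizationMiddleware, AuthResult, the policy decorator, the scope strings) is hypothetical, invented to illustrate the shape of the API, not taken from the SDK or AgentLock:

```python
from __future__ import annotations
import time
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    MODIFY = "modify"      # rewrite arguments before execution
    DEFER = "defer"        # queue for asynchronous review
    STEP_UP = "step_up"    # require human approval / re-auth

@dataclass
class AuthContext:
    identity: str
    roles: set[str]
    scopes: set[str]

@dataclass
class AuthResult:
    decision: Decision
    reason: str
    modified_args: dict | None = None

@dataclass
class AuditRecord:
    tool: str
    identity: str
    decision: Decision
    reason: str
    timestamp: float

class AuthorizationMiddleware:
    """Intercepts tool calls before execution; tools themselves are unchanged."""

    def __init__(self) -> None:
        self._policies: dict[str, Callable[[AuthContext, dict], AuthResult]] = {}
        self.audit_log: list[AuditRecord] = []

    def policy(self, tool_name: str):
        def register(fn: Callable[[AuthContext, dict], AuthResult]):
            self._policies[tool_name] = fn
            return fn
        return register

    def authorize(self, tool_name: str, ctx: AuthContext, args: dict) -> AuthResult:
        policy = self._policies.get(tool_name)
        if policy is None:
            result = AuthResult(Decision.DENY, "no policy registered; deny by default")
        else:
            result = policy(ctx, args)
        # Every decision, allowed or not, produces a structured audit record.
        self.audit_log.append(AuditRecord(
            tool_name, ctx.identity, result.decision, result.reason, time.time(),
        ))
        return result

mw = AuthorizationMiddleware()

@mw.policy("send_email")
def email_policy(ctx: AuthContext, args: dict) -> AuthResult:
    if "email:send" not in ctx.scopes:
        return AuthResult(Decision.DENY, "missing scope email:send")
    if not args.get("to", "").endswith("@ourcorp.example"):
        return AuthResult(Decision.STEP_UP, "external recipient requires approval")
    return AuthResult(Decision.ALLOW, "internal recipient")

ctx = AuthContext("agent-42", roles={"support"}, scopes={"email:send"})
result = mw.authorize("send_email", ctx, {"to": "drop@attacker.example"})
print(result.decision)  # Decision.STEP_UP
```

The non-invasive property falls out of the design: policies are registered against tool names, so existing tool functions never see the middleware and need no code changes.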
I'm the creator of AgentLock (Apache 2.0), an open authorization standard for AI agent tool calls. Happy to build an "openai-agentlock" integration as a separate PyPI package if that's the preferred approach.
Benchmark: 99.5/A against 222 adversarial attack vectors across 35 categories.
GitHub: github.com/webpro255/agentlock