Open framework for confidential AI
-
Updated
Mar 6, 2026 - Rust
Open framework for confidential AI
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Let AI agents like ChatGPT & Claude use real-world local/remote tools you approve via browser extension + optional MCP server
Build secure mcp infrastructure to audit and control every data access by AI agents with minimal effort
A living map of the AI agent security ecosystem.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
Forge is a secure, portable AI Agent runtime. Run agents locally, in cloud, or enterprise environments without exposing inbound tunnels.
Secure Computing in the AI age
IntentusNet - Deterministic execution infrastructure for agent and distributed systems, enabling reproducible workflows, reliable intent routing, transport abstraction, and transparent operational control.
Project Agora: MVP of the Concordia framework. An ethical, symbiotic AI designed to foster and protect human flourishing.
Secure local-first desktop layer for OpenClaw featuring voice, canvas, and hardened security guardrails.
Secure Python Chatbot with PANW AIRS protection and Claude API
Real-time code analysis that detects cross-file semantic errors, type inconsistencies, array bound violations, and function signature drift while you type, before files are saved, without external security APIs.
Secure Python Chatbot with PANW AIRS protection and OpenAI API
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
Offline-first cognitive operating system for synthetic intelligence. Features belief ecology, RL-based goal evolution with differential privacy, contradiction tracing, HMAC-signed audit logs, sandboxed execution, and local LLM inference. Designed for air-gapped, adversarial environments.
Behavior-driven cognitive experimentation toolkit with BCE (Behavioral Consciousness Engine) regularization, telemetry, and plug-and-play integrators for language-model training and evaluation.
Static analysis CLI that scans codebases for LLM prompt-injection, data-exfiltration, jailbreak, and unsafe agent/tool vulnerabilities. Runs fully offline, integrates with CI/CD, and outputs console, JSON, and SARIF reports.
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity - without relying on vendor trust or static manifests.
Add a description, image, and links to the secure-ai topic page so that developers can more easily learn about it.
To associate your repository with the secure-ai topic, visit your repo's landing page and select "manage topics."