# secure-ai

Here are 28 public repositories matching this topic...

This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.

  • Updated May 29, 2025
  • Shell
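The repository itself is not shown here, but the core traceability idea it describes can be sketched: fingerprint a serialized model artifact so the digest can be anchored on a ledger (e.g. as a Fabric transaction) and later re-verified by any consortium member. This is a minimal illustration, not the project's actual chaincode; `model_fingerprint` is a hypothetical helper name.

```python
import hashlib

def model_fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a serialized model artifact.

    Recording this digest on-chain lets any consortium member later
    confirm that the deployed model bytes match what was audited.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large model files don't load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Verification is then a matter of recomputing the digest at deployment time and comparing it against the ledger entry.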

IntentusNet - Deterministic execution infrastructure for agent and distributed systems, enabling reproducible workflows, reliable intent routing, transport abstraction, and transparent operational control.

  • Updated Feb 18, 2026
  • Python

sentinel-os-core

Offline-first cognitive operating system for synthetic intelligence. Features belief ecology, RL-based goal evolution with differential privacy, contradiction tracing, HMAC-signed audit logs, sandboxed execution, and local LLM inference. Designed for air-gapped, adversarial environments.

  • Updated Feb 5, 2026
  • Python
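One of the listed features, HMAC-signed audit logs, can be sketched in a few lines: each entry's tag chains over the previous tag, so tampering with, deleting, or reordering any entry breaks verification. This is a generic illustration of the technique, assuming a chained-HMAC design; it is not sentinel-os-core's actual implementation.

```python
import hashlib
import hmac

class AuditLog:
    """Append-only log; each entry's HMAC chains over the previous tag."""

    def __init__(self, key: bytes):
        self._key = key
        self.entries = []  # list of (message, tag) pairs

    def append(self, message: bytes) -> bytes:
        # Bind this entry to the whole history via the previous tag.
        prev_tag = self.entries[-1][1] if self.entries else b""
        tag = hmac.new(self._key, prev_tag + message, hashlib.sha256).digest()
        self.entries.append((message, tag))
        return tag

    def verify(self) -> bool:
        # Recompute the chain; any edit invalidates every later tag too.
        prev_tag = b""
        for message, tag in self.entries:
            expected = hmac.new(self._key, prev_tag + message,
                                hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                return False
            prev_tag = tag
        return True
```

Because verification needs the key, such logs prove integrity to the key holder (useful in the air-gapped setting described) rather than to arbitrary third parties.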

Behavior-driven cognitive experimentation toolkit with BCE (Behavioral Consciousness Engine) regularization, telemetry, and plug-and-play integrators for language-model training and evaluation.

  • Updated Jan 4, 2026
  • Python

Static analysis CLI that scans codebases for LLM prompt-injection, data-exfiltration, jailbreak, and unsafe agent/tool vulnerabilities. Runs fully offline, integrates with CI/CD, and outputs console, JSON, and SARIF reports.

  • Updated Mar 4, 2026
  • TypeScript
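The scanner's rule engine presumably does far more, but the basic shape of offline, pattern-based detection with JSON-serializable findings can be sketched as below. The two rules are illustrative stand-ins, not the tool's actual ruleset, and `scan` is a hypothetical function name.

```python
import re

# Illustrative patterns only; a real scanner ships a much richer ruleset.
RULES = {
    "prompt-injection": re.compile(
        r"ignore (all |any )?previous instructions", re.I),
    "data-exfiltration": re.compile(
        r"(send|post|upload).{0,40}(api[_ ]?key|password|secret)", re.I),
}

def scan(source: str, filename: str = "<memory>") -> list[dict]:
    """Return one JSON-serializable finding per rule match, with line numbers."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append(
                    {"rule": rule_id, "file": filename, "line": lineno})
    return findings
```

Findings in this shape map naturally onto both the JSON and SARIF outputs the description mentions, since SARIF results are keyed by rule ID, file, and location.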

airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity - without relying on vendor trust or static manifests.

  • Updated Nov 28, 2025
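airlock's actual protocol is not shown on this page, but the general challenge-response attestation pattern it describes can be sketched: the verifier issues a fresh nonce, and the runtime binds its model's identity (a digest of the weights) to that nonce. For brevity this sketch uses a symmetric HMAC key; a protocol that avoids vendor trust, as described, would use asymmetric signatures instead. All function names here are hypothetical.

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Verifier: a fresh random nonce prevents replay of old responses."""
    return os.urandom(32)

def attest(key: bytes, nonce: bytes, model_digest: bytes) -> bytes:
    """Runtime: bind the model's identity to this specific challenge."""
    return hmac.new(key, nonce + model_digest, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, expected_digest: bytes,
           response: bytes) -> bool:
    """Verifier: accept only a response computed over the expected model."""
    expected = hmac.new(key, nonce + expected_digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Substituting a different model, or replaying a response against a new nonce, fails verification, which is the runtime-identity property the description claims.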
