Open-source skills framework for Claude Code — 6 cognitive firewalls that prevent hallucination and bias in AI agents
Repo-native protocol for AI-assisted coding that enforces a simple discipline: research first, plan second, code last. Drop it into any repository to reduce wrong implementations, cut rewrite cycles, and improve decisions earlier in the workflow. Works with Cursor, VS Code, Claude Code, and Windsurf across Claude, GPT, Gemini, Grok, and DeepSeek.
29 .mdc architecture rules that prevent AI coding assistants from hallucinating insecure auth, deprecated imports, and broken Next.js 15 patterns. Built for Cursor Agent and Claude Code.
A 5-layer adversarial quality gate for Claude Code. Catches factual errors, score inflation, and buried conclusions before your AI output ships.
Physical Layer Linter — MCP server that validates RF link budgets, Shannon capacity, and noise floors against hard physical limits
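The description above names concrete physics: the Shannon-Hartley capacity limit and the thermal noise floor. A minimal sketch of the kind of hard-limit check such a linter might run follows; the function names and example figures are illustrative assumptions, not taken from the project.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

def noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float = 0.0) -> float:
    """Thermal noise floor at 290 K: -174 dBm/Hz + 10*log10(B) + NF."""
    return -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

# Hypothetical link budget: flag a claimed data rate above the physical limit.
bw, rx_power_dbm, nf = 20e6, -70.0, 6.0
snr_db = rx_power_dbm - noise_floor_dbm(bw, nf)          # ~25 dB here
limit = shannon_capacity_bps(bw, snr_db)                 # ~166 Mbit/s
claimed = 500e6                                          # bit/s, e.g. hallucinated
verdict = "VIOLATES" if claimed > limit else "respects"
print(f"SNR {snr_db:.1f} dB, Shannon limit {limit/1e6:.0f} Mbit/s; claim {verdict} the limit")
```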
[Veracity] Dual-LLM hallucination defense — adversarial verification with Localization Gap detection for Arabic knowledge
Stop your AI from hallucinating its own history. Session discipline for Claude Code — the problem everyone has but nobody else is solving mechanically.
The first open-source GEO audit tool for DeepSeek (China's first enterprise-grade GEO audit tool built for DeepSeek).
Watchdog — AI session immune system for Claude Code. Detects stuck loops, hallucinations, task drift & context decay. Like /btw but for session health.
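As a generic illustration of one way to detect stuck loops mechanically (not the project's actual mechanism), a watchdog can hash each action in a sliding window and flag excessive repeats:

```python
from collections import deque
import hashlib

class LoopDetector:
    """Flag a session as stuck when the same action hash recurs too often."""
    def __init__(self, window: int = 10, threshold: int = 3):
        self.recent = deque(maxlen=window)   # hashes of the last `window` actions
        self.threshold = threshold

    def observe(self, action: str) -> bool:
        h = hashlib.sha1(action.encode()).hexdigest()
        self.recent.append(h)
        return self.recent.count(h) >= self.threshold

det = LoopDetector()
for step in ["edit foo.py", "run tests", "edit foo.py", "run tests", "edit foo.py"]:
    if det.observe(step):
        print(f"stuck loop suspected at: {step}")
```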
Verify BibTeX references against OpenAlex & CrossRef to detect errors and AI hallucinations
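For reference, a hypothetical check along these lines against the public CrossRef REST API; the `crossref_title` helper and the example entry are assumptions for illustration, not the repo's API.

```python
import requests

def crossref_title(doi: str) -> str | None:
    """Look up a DOI on CrossRef and return the registered title, if any."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

# Compare a BibTeX entry's title against the registry of record.
entry = {"doi": "10.1038/nature14539", "title": "Deep learning"}
registered = crossref_title(entry["doi"])
if registered is None:
    print("DOI not found: possible hallucinated reference")
elif registered.lower() != entry["title"].lower():
    print(f"Title mismatch: bib says {entry['title']!r}, CrossRef says {registered!r}")
else:
    print("Reference verified")
```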
A truth filter for AI output. An experiment: I pointed property-based testing (Hegel / Hypothesis lineage) at a specification instead of code, then ran an AI-generated 36 KB research synthesis through the harness — 27 of 28 claims held, 1 was falsified and re-encoded to pass, and 6 small structural ingredients surfaced. One case write-up.
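The general pattern, sketched with Hypothesis: a claim from the specification is re-encoded as an executable property, and generated inputs either uphold or falsify it. The claim and the `dedupe` helper below are hypothetical stand-ins, not taken from the write-up.

```python
from hypothesis import given, strategies as st

# A claim re-encoded as an executable property. Hypothetical example:
# "deduplicating a list never increases its length and preserves membership."
def dedupe(items):
    return list(dict.fromkeys(items))

@given(st.lists(st.integers()))
def test_claim_dedupe(items):
    out = dedupe(items)
    assert len(out) <= len(items)      # length never grows
    assert set(out) == set(items)      # membership preserved

test_claim_dedupe()  # runs many generated cases; any failure falsifies the claim
```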
Open-source AI reasoning auditor for legal citations. Verifies existence, quote accuracy, and logical coherence against primary sources.
A sample of how roles can be used to give the LLM a sense of self, help it identify the user's purpose, and ensure greater governance over its actions.
Stop AI hallucinations with evidence checks that block false claims and enforce a verified before-and-after history.
Ask Twice 🔍 — AI Response Fact-Checker Chrome Extension (a browser extension that verifies the credibility of AI answers).
From concept to perceptual lens for AI.