국가법령정보MCP (Korean National Law Information MCP) | 41 Ministry of Government Legislation APIs → 16 MCP tools. AI-powered search, retrieval, and analysis of statutes, case law, ordinances, and treaties, plus citation verification to guard against LLM hallucination.
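As a rough illustration of such a citation guard (a minimal sketch, not the repo's actual implementation), the code below re-checks every statute citation found in a generated answer against an authoritative lookup. `lookup` is a hypothetical stand-in for an MCP tool backed by the law.go.kr open API.

```python
# Citation-guard sketch: every statute citation in a generated answer is
# re-checked against an authoritative lookup before the answer is returned.
# `lookup` is a hypothetical stand-in for an MCP tool that queries the
# Korean National Law Information API; the repo's real tools are not
# reproduced here.
import re
from typing import Callable, Optional

# Matches citations like "민법 제103조" (statute name + article number).
CITATION = re.compile(r"(?P<law>[\w가-힣]+법)\s*제(?P<article>\d+)조")

def verify_citations(answer: str,
                     lookup: Callable[[str], Optional[str]]) -> list[str]:
    """Return the citations in `answer` that could not be verified."""
    unverified = []
    for m in CITATION.finditer(answer):
        statute_text = lookup(m.group("law"))  # None if statute is unknown
        if statute_text is None or f"제{m.group('article')}조" not in statute_text:
            unverified.append(m.group(0))
    return unverified

# Usage: refuse or regenerate when verify_citations(answer, lookup) != [].
```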
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection.
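The core black-box UQ idea can be sketched in a few lines: sample several answers to the same prompt and treat low agreement as high uncertainty. This is only a toy illustration with a word-overlap score; UQLM's actual scorers (e.g., semantic-similarity-based ones) are more sophisticated, and `generate` is a hypothetical LLM callable.

```python
# Consistency-based uncertainty sketch: sample k answers for one prompt and
# score how much they agree. Low agreement is a cheap black-box signal of
# likely hallucination. `generate` is a hypothetical LLM callable.
from itertools import combinations
from typing import Callable

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def consistency_score(prompt: str,
                      generate: Callable[[str], str],
                      k: int = 5) -> float:
    """Mean pairwise similarity across k sampled answers (1.0 = identical)."""
    samples = [generate(prompt) for _ in range(k)]
    pairs = list(combinations(samples, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Usage: flag the response when agreement across samples is low, e.g.
# if consistency_score(prompt, generate) < 0.5: mark_as_uncertain()
```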
[NeurIPS 2025] SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations
RAG hallucination detection via Layer-wise Relevance Propagation (LRP).
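Full LRP propagates relevance layer by layer with dedicated rules; as a simpler stand-in, the sketch below uses gradient-times-input attribution to ask the same question: how much of the answer's attribution mass falls on the retrieved context? A low share suggests the answer was not grounded in the retrieval. The model choice and token alignment are illustrative approximations.

```python
# Attribution-based RAG grounding check (gradient x input, a simpler
# stand-in for LRP). Token alignment between the separately tokenized
# segments and the concatenation is approximate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def context_attribution(context: str, question: str, answer: str) -> float:
    ids = tok(context + question + answer, return_tensors="pt").input_ids
    n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
    n_ans = tok(answer, return_tensors="pt").input_ids.shape[1]
    emb = model.get_input_embeddings()(ids).detach().requires_grad_(True)
    logp = torch.log_softmax(model(inputs_embeds=emb).logits[0, :-1], -1)
    targets = ids[0, 1:]
    # Sum the log-probabilities of the answer tokens, then backprop
    # to the input embeddings.
    logp[torch.arange(targets.numel()), targets][-n_ans:].sum().backward()
    rel = (emb.grad * emb)[0].norm(dim=-1)       # per-token relevance
    return float(rel[:n_ctx].sum() / rel.sum())  # share on the context
```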
CRoPS (TMLR): a novel hallucination detection method.
Build your own open-source REST API endpoint to detect hallucinations in LLM-generated responses.
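A minimal sketch of what such a self-hosted endpoint can look like, assuming FastAPI as the web framework. The scoring function here is a deliberate placeholder heuristic; a real service would swap in an NLI model or a consistency scorer behind the same interface.

```python
# Minimal self-hosted hallucination-detection endpoint (sketch).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CheckRequest(BaseModel):
    source: str    # retrieved/ground-truth text
    response: str  # LLM output to verify

class CheckResult(BaseModel):
    support: float     # 0 = unsupported, 1 = fully supported
    hallucinated: bool

def support_score(source: str, response: str) -> float:
    # Placeholder heuristic: fraction of response words found in the source.
    # Replace with an NLI model or consistency scorer for real use.
    src = set(source.lower().split())
    words = response.lower().split()
    return sum(w in src for w in words) / max(len(words), 1)

@app.post("/detect", response_model=CheckResult)
def detect(req: CheckRequest) -> CheckResult:
    s = support_score(req.source, req.response)
    return CheckResult(support=s, hallucinated=s < 0.5)

# Run with: uvicorn app:app --reload, then POST JSON to /detect.
```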
Semi-supervised pipeline to detect LLM hallucinations. Uses Mistral-7B for zero-shot pseudo-labeling and DeBERTa for efficient classification.
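The pipeline's first stage can be sketched as follows, with `ask_labeler` as a hypothetical callable wrapping Mistral-7B that returns an answer and a confidence. Only confident pseudo-labels are kept; the subsequent DeBERTa fine-tuning follows the standard transformers Trainer recipe and is elided here.

```python
# Structural sketch of the semi-supervised pipeline: a large model
# pseudo-labels unlabeled (context, claim) pairs zero-shot, confident
# labels are kept, and a small classifier is fine-tuned on them.
from typing import Callable

PROMPT = ("Context: {ctx}\nClaim: {claim}\n"
          "Is the claim supported by the context? Answer yes or no.")

def pseudo_label(pairs: list[tuple[str, str]],
                 ask_labeler: Callable[[str], tuple[str, float]],
                 min_conf: float = 0.9) -> list[tuple[str, str, int]]:
    """Keep only pairs the zero-shot labeler is confident about."""
    labeled = []
    for ctx, claim in pairs:
        answer, conf = ask_labeler(PROMPT.format(ctx=ctx, claim=claim))
        if conf >= min_conf:
            # label 1 = hallucinated (claim unsupported by context)
            labeled.append((ctx, claim, int(answer.strip().lower() == "no")))
    return labeled  # feed into DeBERTa fine-tuning
```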
Research paper on constructing agentic debate pipelines to reduce LLM hallucinations, evaluated with both open-source and commercial models.
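A minimal sketch of one debate round, assuming a hypothetical chat-completion callable `llm`: a proposer answers, a critic attacks unsupported claims, and the proposer revises. Real pipelines typically run several rounds and may assign different models to each role.

```python
# Two-role debate loop (sketch): propose, critique, revise.
from typing import Callable

def debate(question: str, llm: Callable[[str], str], rounds: int = 2) -> str:
    answer = llm(f"Answer concisely:\n{question}")
    for _ in range(rounds):
        critique = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "List any claims that are unsupported or likely fabricated."
        )
        answer = llm(
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer, removing or correcting the criticized claims."
        )
    return answer
```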
Lecture-RAG is a grounding-aware Video-RAG framework that reduces hallucinations and supports algorithmic reasoning in educational slide-based and blackboard tutorial videos.
Automated detection, visualization, and suppression of hallucination-associated neurons in open-source LLMs; an LLM mechanistic interpretability research tool.
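The detect-and-suppress idea can be sketched with PyTorch forward hooks: rank hidden units by their mean activation gap between hallucinated and faithful generations, then zero the top-ranked units at inference time. The layer path in the usage comment is illustrative (GPT-2 naming), not the tool's actual API.

```python
# Activation-based neuron suppression sketch using forward hooks.
import torch

def suppress_neurons(module: torch.nn.Module, neuron_idx: list[int]):
    """Zero the given hidden units in `module`'s output on every forward."""
    def hook(_mod, _inp, out):
        out = out.clone()
        out[..., neuron_idx] = 0.0
        return out  # returning a tensor replaces the module's output
    return module.register_forward_hook(hook)

def rank_neurons(acts_halluc: torch.Tensor, acts_faithful: torch.Tensor,
                 top_k: int = 16) -> list[int]:
    """Rank units by mean activation gap between the two conditions."""
    gap = acts_halluc.mean(0) - acts_faithful.mean(0)
    return gap.topk(top_k).indices.tolist()

# handle = suppress_neurons(model.transformer.h[10].mlp, bad_neurons)
# ... run generation ...
# handle.remove()  # restore the unmodified model
```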
Source code for the paper "A Hallucination Mitigation Scheme in Security Policy Generation with Large Language Models".
This repository contains the codebase for a proof of concept (PoC) of LLM package hallucination and its associated vulnerabilities.
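The core risk is easy to demonstrate: an LLM-suggested dependency that does not exist on PyPI can later be registered by an attacker (slopsquatting). A minimal existence check using only the public PyPI JSON API:

```python
# Package-hallucination check: flag LLM-suggested dependencies that do not
# exist on PyPI. Uses the public JSON API, which returns 404 for unknown
# package names.
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:  # unknown package: possible hallucination
            return False
        raise

def audit(packages: list[str]) -> list[str]:
    """Return the LLM-suggested packages that do not exist on PyPI."""
    return [p for p in packages if not exists_on_pypi(p)]
```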