UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
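A minimal sketch of the black-box idea behind UQ-based hallucination detection: sample the model several times and treat agreement as confidence. This is illustrative only, not UQLM's actual API; `generate` is a hypothetical sampling callable.

```python
# Illustrative sketch, NOT UQLM's API: black-box uncertainty estimation via
# sampling consistency. `generate` is a hypothetical callable that returns
# one temperature-sampled completion per call.
from collections import Counter

def consistency_score(generate, prompt: str, n_samples: int = 5) -> float:
    """Confidence = fraction of samples that agree with the modal answer."""
    answers = [generate(prompt) for _ in range(n_samples)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / n_samples  # a low score suggests a likely hallucination
```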
An up-to-date curated list of state-of-the-art research work, papers, and resources on large vision-language model hallucinations
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Always keep your codebases ready for Agents. Improve any coding workflow by at least 2x by maintaining a live, pluggable context layer per repo that creates and maintains Agents.md
[ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO
[EMNLP'25] A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
FastMemory is a topological representation of text data that uses concepts as the primary input. It improves RAG (replacing embeddings and vectors entirely), AI memory, and LLM queries by up to 100% on the Hugging Face benchmarks (22+ SOTA)
[ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".
Official implementation for the NeurIPS 2025 paper: "FlexAC: Towards Flexible Control of Associative Reasoning in Multimodal Large Language Models". A lightweight, training-free framework for modulating associative behavior in MLLMs.
Oscillink — Self‑Optimizing Coherent Memory for Embedding Workflows
Official PyTorch implementation of "LPOI: Listwise Preference Optimization for Vision Language Models" (ACL 2025 Main)
A collection of papers on LLM/LVLM hallucination evaluation, detection, and mitigation
[ACL 2026] 📷 CCD: Mitigating Hallucinations in Radiology MLLMs via Clinical Contrastive Decoding — A flexible, plug-and-play toolkit for various radiology MLLM backbones, further boosting overall performance
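For context, decoding-time methods in this family typically build on the generic contrastive step sketched below; this is a rough illustration under assumptions (`alpha` and the source of the contrast pass vary by method), not CCD's implementation or its clinical conditioning.

```python
# Generic contrastive-decoding step (illustrative, not CCD's code): sharpen
# the main model's next-token logits against a contrast pass, e.g. one run
# on a degraded or counterfactual input. `alpha` is an assumed hyperparameter.
import torch

def contrastive_logits(main_logits: torch.Tensor,
                       contrast_logits: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    # (1 + alpha) * main - alpha * contrast, the common formulation in
    # visual contrastive decoding variants
    return (1.0 + alpha) * main_logits - alpha * contrast_logits
```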
[NeurIPS 2025] Official implementation for "Robust Hallucination Detection in LLMs via Adaptive Token Selection" https://arxiv.org/abs/2504.07863
AEP - Agent Element Protocol (v2.0) | A deterministic governance architecture for AI coding agents to build all sorts of components that follow rigid build schemas.
[CVPR 2025 Workshop] PAINT (Paying Attention to INformed Tokens) is a plug-and-play framework that intervenes in the LLM's self-attention and selectively boosts attention to visually informed tokens to mitigate hallucination in Vision Language Models
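A hedged sketch of the general mechanism such attention-intervention methods use: up-weight post-softmax attention on visual-token key positions and renormalize. The layer choice, the boost factor `alpha`, and how informed tokens are selected are assumptions here, not PAINT's actual code.

```python
# Sketch of attention intervention in the PAINT spirit (not PAINT's code):
# scale post-softmax attention on visual-token key positions, then
# renormalize each row. `image_mask` and `alpha` are illustrative assumptions.
import torch

def boost_visual_attention(attn: torch.Tensor, image_mask: torch.Tensor,
                           alpha: float = 1.5) -> torch.Tensor:
    """attn: (batch, heads, q_len, k_len) softmaxed attention weights;
    image_mask: (k_len,) bool marking visual-token key positions."""
    scale = torch.ones_like(image_mask, dtype=attn.dtype)
    scale[image_mask] = alpha                       # boost visual keys only
    boosted = attn * scale
    return boosted / boosted.sum(dim=-1, keepdim=True)  # rows sum to 1 again
```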
Stop hallucinations by verifying citations.
Unofficial implementation of Microsoft’s Claimify Paper: extracts specific, verifiable, decontextualized claims from LLM Q&A to be used for Hallucination, Groundedness, Relevancy and Truthfulness detection
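A minimal sketch of the claim-extraction step such a pipeline performs, assuming a hypothetical `llm` completion callable; the prompt paraphrases the stated goal (specific, verifiable, decontextualized claims) rather than reproducing Claimify's actual prompts.

```python
# Illustrative claim-extraction step (not Claimify's actual prompts or API).
# `llm` is a hypothetical callable: prompt string in, completion string out.
import json

EXTRACT_PROMPT = """List every specific, verifiable claim the answer makes.
Rewrite each claim so it stands alone without the surrounding context.
Return a JSON array of strings.

Question: {question}
Answer: {answer}"""

def extract_claims(llm, question: str, answer: str) -> list[str]:
    raw = llm(EXTRACT_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)  # each claim can then be verified independently
```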
Agentic-AI framework w/o the headaches