Neurosymbolic framework that catches LLM hallucinations mid-reasoning using causal graphs and GNNs. Converts chain-of-thought outputs into Causal Reasoning Graphs, detects flawed steps with a Graph Attention Network, and auto-corrects via RAG context injection before errors propagate.
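A minimal sketch of the detect-and-flag step, assuming a toy Causal Reasoning Graph in which each chain-of-thought step is a node with a support score and edges from its premise steps. A simple threshold scorer stands in for the Graph Attention Network; all names, scores, and thresholds here are illustrative, not the project's API:

```python
# Toy Causal Reasoning Graph: each chain-of-thought step is a node;
# edges point from premise steps to the steps that depend on them.
# A threshold check stands in for the learned Graph Attention Network.

from dataclasses import dataclass, field

@dataclass
class Step:
    sid: int
    text: str
    support: float  # stand-in for a learned faithfulness score in [0, 1]
    parents: list = field(default_factory=list)  # ids of premise steps

def flag_flawed_steps(steps, threshold=0.5):
    """Return ids of steps that are unsupported or depend on a flawed step."""
    flawed = set()
    for s in steps:  # assumes steps appear in causal (topological) order
        if s.support < threshold or any(p in flawed for p in s.parents):
            flawed.add(s.sid)
    return flawed

chain = [
    Step(0, "Drug X is an ACE inhibitor.", 0.9),
    Step(1, "ACE inhibitors lower blood pressure.", 0.95, parents=[0]),
    Step(2, "Therefore Drug X cures diabetes.", 0.2, parents=[1]),
    Step(3, "So it should replace insulin.", 0.8, parents=[2]),
]

print(flag_flawed_steps(chain))  # {2, 3}: step 2 is unsupported, step 3 inherits the flaw
```

In the full system, the flagged subgraph would then be repaired by retrieving grounding context (RAG) and regenerating the affected steps, rather than simply discarding them.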
Evidence-grounded medical RAG system that retrieves FDA and NICE drug guidelines, generates cited answers, and safely refuses unsupported queries to minimize hallucinations.
Explicit control and observability over when an LLM should answer, hedge, or refuse — treating generation as a governed system layer, not a side effect of retrieval.
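One way to make that answer/hedge/refuse control explicit is a small policy gate evaluated over retrieval evidence before generation. The thresholds, function name, and notion of a per-passage support score below are illustrative assumptions, not the project's actual interface:

```python
# Governed generation gate: decide answer / hedge / refuse from retrieval
# evidence before any text is generated. Thresholds are illustrative.

def decide(evidence_scores, answer_t=0.75, hedge_t=0.4):
    """evidence_scores: support scores of retrieved passages, each in [0, 1]."""
    best = max(evidence_scores, default=0.0)
    if best >= answer_t:
        return "answer"   # strong grounding: generate with citations
    if best >= hedge_t:
        return "hedge"    # weak grounding: answer with explicit uncertainty
    return "refuse"       # no grounding: decline rather than hallucinate

print(decide([0.9, 0.3]))  # answer
print(decide([0.55]))      # hedge
print(decide([]))          # refuse
```

Because the gate runs as its own layer with logged inputs and outputs, the answer/hedge/refuse decision is observable and tunable independently of the retriever and the generator.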
Emergent pseudo-intimacy and emotional overflow in long-term human-AI dialogue: A case study on LLM behavior in affective computing and human-AI intimacy.