Omni-NLI is a self-hostable server that provides natural language inference (NLI) capabilities via REST and Model Context Protocol (MCP) interfaces. It can be used both as a scalable, stateless standalone microservice (via the REST API) and as an MCP server that lets AI agents add a verification layer to AI-based applications.
Given two pieces of text, called the premise and the hypothesis, NLI (also known as textual entailment) is the task of determining the directional relationship between them as perceived by a human reader. The relationship is assigned one of three labels:
"entailment": the hypothesis is supported by the premise"contradiction": the hypothesis is contradicted by the premise"neutral": the hypothesis is neither supported nor contradicted by the premise
Important
NLI is not the same as logical entailment. Its goal is to determine whether a reasonable human would consider the hypothesis to follow from the premise; it checks for consistency rather than the absolute truth of the hypothesis.
Typical applications of NLI include:
- Checking whether new text is consistent with earlier text, for example, whether a new response from a chatbot or AI assistant contradicts something that was said earlier in the conversation.
- Checking whether a summary contradicts the original text in some way (see the sketch after this list).
- Checking whether the documents in a ranked list of retrieval results entail the query.
- Checking whether a piece of text is supported by a set of facts. Note that this is a consistency check, not formal logical inference.
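As a concrete illustration of the summarization use case, here is a minimal sketch that calls the REST endpoint shown in the quick start below to check a generated summary against its source document. It assumes an Omni-NLI server is already running locally on the default port; the endpoint and response fields are the ones from the example response further down.

```python
# Minimal sketch: flag a summary that contradicts its source document.
# Assumes an Omni-NLI server is running locally on the default port
# (see the quick start below for how to install and start it).
import requests

SOURCE = "The company reported a 12% revenue increase in Q3, driven by cloud services."
SUMMARY = "The company's revenue fell sharply in the third quarter."

response = requests.post(
    "http://127.0.0.1:8000/api/v1/nli/evaluate",
    json={"premise": SOURCE, "hypothesis": SUMMARY},
    timeout=60,
)
result = response.json()

# A "contradiction" label (or a low-confidence "entailment") is a signal
# that the summary should be reviewed or regenerated.
if result["label"] == "contradiction":
    print(f"Inconsistent summary (confidence {result['confidence']:.2f})")
else:
    print(f"{result['label']} (confidence {result['confidence']:.2f})")
```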
Important
The quality of the results depends heavily on the model (the LLM) that is used. A good strategy is to first fine-tune the model on a dataset of premise-hypothesis-label triples that are relevant to your application domain.
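For illustration only, such a dataset is simply a collection of labeled premise-hypothesis pairs from your domain. The snippet below is a hypothetical example; the exact file format will depend on the fine-tuning pipeline you use.

```python
# Hypothetical domain-specific NLI triples (customer-support domain).
# The structure is illustrative, not a format required by Omni-NLI.
training_triples = [
    {
        "premise": "The invoice was paid in full on March 3rd.",
        "hypothesis": "The customer still owes money on this invoice.",
        "label": "contradiction",
    },
    {
        "premise": "The order shipped yesterday via express delivery.",
        "hypothesis": "The order has left the warehouse.",
        "label": "entailment",
    },
    {
        "premise": "The ticket was escalated to the billing team.",
        "hypothesis": "The customer asked for a refund.",
        "label": "neutral",
    },
]
```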
Key features:

- Helps mitigate LLM hallucinations by verifying that generated content is supported by facts
- Supports models provided by different backends, including Ollama, HuggingFace (public and private/gated models), and OpenRouter
- Supports REST API (for traditional applications) and MCP (for AI agents) interfaces
- Fully configurable and very scalable, with built-in caching
- Provides confidence scores and (optional) reasoning traces for explainability
See ROADMAP.md for the list of implemented and planned features.
Important
Omni-NLI is in early development, so bugs and breaking changes are expected. Please use the issues page to report bugs or request features.
Install the package and start the server:

```bash
pip install omni-nli[huggingface]
omni-nli
```

Then send a request to the REST API:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{
        "premise": "A football player kicks a ball into the goal.",
        "hypothesis": "The football player is asleep on the field."
      }' \
  http://127.0.0.1:8000/api/v1/nli/evaluate
```

Example response:

```json
{
  "label": "contradiction",
  "confidence": 0.99,
  "model": "microsoft/Phi-3.5-mini-instruct",
  "backend": "huggingface"
}
```

Check out the Omni-NLI Documentation for more information, including configuration options, API reference, and examples.
Contributions are always welcome! Please see CONTRIBUTING.md for details on how to get started.
Omni-NLI is licensed under the MIT License (see LICENSE).
- The logo is from SVG Repo with some modifications.
