Provides natural language inference capabilities through a multi-interface server, supporting a REST API and the Model Context Protocol (MCP).
Omni-NLI is a self-hostable server that delivers natural language inference (NLI) functionality. It lets developers determine the relationship between a premise (fact) and a hypothesis (claim), classifying it as "entailment," "contradiction," or "neutral." The server integrates into traditional applications via a REST API or runs as an MCP server, providing a verification layer for AI agents and applications. Typical uses include fact-checking the output of large language models (LLMs) and validating the responses of question-answering systems.
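As a sketch of how a client might call such a service over HTTP, the snippet below sends a premise/hypothesis pair and reads back a label. The `/classify` endpoint path and the request/response field names are illustrative assumptions, not the server's documented API.

```python
import json
import urllib.request

# The three NLI labels the server is described as returning.
LABELS = {"entailment", "contradiction", "neutral"}

def build_payload(premise: str, hypothesis: str) -> bytes:
    """Encode a premise/hypothesis pair as a JSON request body."""
    return json.dumps({"premise": premise, "hypothesis": hypothesis}).encode("utf-8")

def parse_label(response_body: bytes) -> str:
    """Extract and validate the NLI label from a JSON response body.

    NOTE: the "label" field name is an assumption for illustration.
    """
    label = json.loads(response_body)["label"]
    if label not in LABELS:
        raise ValueError(f"unexpected label: {label!r}")
    return label

def classify(premise: str, hypothesis: str,
             base_url: str = "http://localhost:8000") -> str:
    """POST the pair to a hypothetical /classify endpoint and return the label."""
    req = urllib.request.Request(
        f"{base_url}/classify",
        data=build_payload(premise, hypothesis),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_label(resp.read())
```

Separating payload construction and response parsing from the HTTP call keeps the pure parts testable without a running server.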
Key Features
1. Fully configurable and designed for high scalability.
2. Supports multiple backend model providers, including Ollama, HuggingFace, and OpenRouter.
3. Offers both a REST API for traditional applications and the Model Context Protocol (MCP) for AI agents.
Use Cases
1. Fact-checking the output of large language models (LLMs).
2. Verifying the correctness of answers generated by question-answering systems.
3. Adding a verification layer to AI agents and agent-based applications.
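The fact-checking use case can be sketched as a loop that classifies each claim from an LLM's answer against a trusted source passage and buckets the results. The `classify` callable is injected here as a stand-in for a real NLI client, so the sketch stays self-contained; the function name and structure are illustrative, not part of Omni-NLI.

```python
from typing import Callable, Dict, Iterable, List

def verify_claims(source: str,
                  claims: Iterable[str],
                  classify: Callable[[str, str], str]) -> Dict[str, List[str]]:
    """Partition claims by their NLI label against a source passage.

    `classify(premise, hypothesis)` is any callable returning one of
    "entailment", "contradiction", or "neutral" -- e.g. a client for
    an NLI service. Claims landing in "contradiction" are candidates
    for rejection; "neutral" claims are unsupported and may need review.
    """
    buckets: Dict[str, List[str]] = {
        "entailment": [], "contradiction": [], "neutral": []
    }
    for claim in claims:
        buckets[classify(source, claim)].append(claim)
    return buckets

# Usage with a trivial stub in place of a real NLI backend:
stub = lambda p, h: "contradiction" if "flat" in h else "entailment"
report = verify_claims(
    "The Earth is an oblate spheroid.",
    ["The Earth is round.", "The Earth is flat."],
    stub,
)
```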