CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection


Journal article


Ron Eliav, Arie Cattan, Eran Hirsch, Shahaf Bassan, Elias Stengel-Eskin, Mohit Bansal, Ido Dagan
under-review, 2025

Cite
APA
Eliav, R., Cattan, A., Hirsch, E., Bassan, S., Stengel-Eskin, E., Bansal, M., & Dagan, I. (2025). CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection. Under-Review.


Chicago/Turabian
Eliav, Ron, Arie Cattan, Eran Hirsch, Shahaf Bassan, Elias Stengel-Eskin, Mohit Bansal, and Ido Dagan. “CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection.” under-review (2025).


MLA
Eliav, Ron, et al. “CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection.” Under-Review, 2025.


BibTeX

@article{ron2025a,
  title = {CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection},
  year = {2025},
  journal = {under-review},
  author = {Eliav, Ron and Cattan, Arie and Hirsch, Eran and Bassan, Shahaf and Stengel-Eskin, Elias and Bansal, Mohit and Dagan, Ido}
}

Abstract

A common approach to hallucination detection casts it as a natural language inference (NLI) task, often using LLMs to classify whether the generated text is entailed by corresponding reference texts. Since entailment classification is a complex reasoning task, one would expect LLMs to benefit from generating an explicit reasoning process, as in CoT reasoning or the explicit "thinking" of recent reasoning models. In this work, we propose that guiding such models to perform a systematic and comprehensive reasoning process, one that both decomposes the text into smaller facts and finds evidence in the source for each fact, allows models to make finer-grained and more accurate entailment decisions, leading to improved performance. To that end, we define a three-step reasoning process, consisting of (i) claim decomposition, (ii) sub-claim attribution and entailment classification, and (iii) aggregated classification, and show that such guided reasoning indeed yields improved hallucination detection. Following this reasoning framework, we introduce an analysis scheme consisting of several metrics that measure the quality of the intermediate reasoning steps, which provides additional empirical evidence for the improved quality of our guided reasoning scheme.
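
As a rough illustration, the minimal Python sketch below shows how the three-step process could be wired around a generic text-in/text-out LLM callable. The prompt wording, the label set, and the strict "any non-entailed sub-claim" aggregation rule are illustrative assumptions for this sketch, not the paper's exact prompts or formulation.

# Minimal sketch of the three-step reasoning pipeline described above.
# `llm` is any callable mapping a prompt string to a response string;
# prompts and the aggregation rule are illustrative assumptions.

def decompose(claim: str, llm) -> list[str]:
    """Step (i): split the generated claim into atomic sub-claims."""
    prompt = (
        "Decompose the following claim into a list of minimal, "
        f"self-contained facts, one per line:\n{claim}"
    )
    return [line.strip("- ").strip()
            for line in llm(prompt).splitlines() if line.strip()]

def attribute_and_classify(sub_claim: str, source: str, llm) -> str:
    """Step (ii): locate supporting evidence in the source, then label the
    sub-claim as 'entailed', 'contradicted', or 'neutral'."""
    prompt = (
        f"Source:\n{source}\n\n"
        f"Sub-claim: {sub_claim}\n"
        "Quote the most relevant source sentence(s), then answer with one "
        "word: entailed, contradicted, or neutral."
    )
    answer = llm(prompt).strip().lower()
    for label in ("entailed", "contradicted", "neutral"):
        if label in answer:
            return label
    return "neutral"  # conservative fallback when the label is unparseable

def detect_hallucination(claim: str, source: str, llm) -> bool:
    """Step (iii): aggregate sub-claim labels into a claim-level decision.
    Here any non-entailed sub-claim flags a hallucination (one possible rule)."""
    labels = [attribute_and_classify(sc, source, llm)
              for sc in decompose(claim, llm)]
    return any(label != "entailed" for label in labels)

The any-rule aggregation shown here is the strictest choice, favoring recall of hallucinations over precision; softer rules (e.g., a threshold on the fraction of entailed sub-claims) are equally compatible with the same three-step structure.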


