The Learning-Knowledge-Reasoning Paradigm for Natural Language Understanding and Question Answering

Author Arindam Mitra


Author Details

Arindam Mitra
  • Arizona State University, Tempe, USA

Cite As

Arindam Mitra. The Learning-Knowledge-Reasoning Paradigm for Natural Language Understanding and Question Answering. In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Open Access Series in Informatics (OASIcs), Volume 64, pp. 19:1-19:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Given a text, several questions can be asked. For some of these questions, the answer can be looked up directly in the text. For several others, however, one needs additional knowledge and sophisticated reasoning to find the answer. Developing AI agents that can answer such questions and also justify their answers is the focus of this research. Towards this goal, we use the language of Answer Set Programming as the knowledge representation and reasoning language for the agent. The question that then arises is how to obtain this additional knowledge. In this work we show that, using existing Natural Language Processing parsers and a scalable Inductive Logic Programming algorithm, it is possible to learn this additional knowledge (consisting mostly of commonsense knowledge) from question-answering datasets; the learned knowledge can then be used for inference.
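
To illustrate the paradigm, here is a minimal Answer Set Programming sketch for a bAbI-style story. The parsed facts, the predicate names (happens/2, holds/2), and the rules themselves are illustrative assumptions, standing in for the output of a hypothetical parser and for the kind of commonsense rules the ILP algorithm would learn from question-answer pairs; they are not taken from the paper.

```prolog
% Time points of the story (illustrative).
time(1..4).

% Facts a hypothetical NLP parser might extract from the story
% "Mary went to the kitchen. Mary picked up the apple."
happens(go(mary, kitchen), 1).
happens(pickup(mary, apple), 2).

% Commonsense rules of the kind an ILP algorithm could learn:
% effects of actions,
holds(at(A, L), T+1)       :- happens(go(A, L), T), time(T+1).
holds(carrying(A, O), T+1) :- happens(pickup(A, O), T), time(T+1).
% a carried object is wherever its carrier is,
holds(at(O, L), T) :- holds(carrying(A, O), T), holds(at(A, L), T).
% and inertia: fluents persist unless known to change.
holds(F, T+1) :- holds(F, T), time(T+1), not -holds(F, T+1).

% Query "Where is the apple?" at time 3.
answer(L) :- holds(at(apple, L), 3).
```

Under an answer set solver such as clingo, the single answer set contains answer(kitchen): Mary's location persists by inertia, and the carried-object rule places the apple with her, which is exactly the kind of multi-step inference that a direct text lookup cannot provide.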

Subject Classification

ACM Subject Classification
  • Computing methodologies → Natural language processing
  • Computing methodologies → Knowledge representation and reasoning

Keywords and Phrases
  • Natural Language Understanding
  • Question Answering
  • Knowledge Acquisition
  • Inductive Logic Programming
  • Knowledge Representation and Reasoning



