Improving Candidate Quality of Probabilistic Logic Models

Authors: Joana Côrte-Real, Anton Dries, Inês Dutra, Ricardo Rocha



File

OASIcs.ICLP.2018.6.pdf
  • Filesize: 491 kB
  • 14 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.ICLP.2018.6

Author Details

Joana Côrte-Real
  • CRACS & INESC TEC and Faculty of Sciences, University of Porto, Rua do Campo Alegre, 1021, 4169-007 Porto, Portugal
Anton Dries
  • KU Leuven, Department of Computer Science, Celestijnenlaan 200A bus 2402, 3001 Leuven, Belgium
Inês Dutra
  • CINTESIS and Faculty of Sciences, University of Porto, Rua do Campo Alegre, 1021, 4169-007 Porto, Portugal
Ricardo Rocha
  • CRACS & INESC TEC and Faculty of Sciences, University of Porto, Rua do Campo Alegre, 1021, 4169-007 Porto, Portugal

Cite As

Joana Côrte-Real, Anton Dries, Inês Dutra, and Ricardo Rocha. Improving Candidate Quality of Probabilistic Logic Models. In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Open Access Series in Informatics (OASIcs), Volume 64, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)
https://doi.org/10.4230/OASIcs.ICLP.2018.6

Abstract

Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) uses Inductive Logic Programming (ILP) extended with probabilistic facts to produce meaningful and interpretable models of such phenomena. This combination of First Order Logic (FOL) theories and uncertainty makes PILP a well-suited tool for knowledge representation and extraction. However, this flexibility comes with a problem, inherited from ILP, of exponential search space growth, so often only a subset of all possible models can be explored within the available resources. Furthermore, the probabilistic evaluation of FOL theories, performed by the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which reduces the search space based on the probabilistic evaluation of models, together with a safe pruning criterion, which guarantees that the optimal model is not pruned away, and two alternative, more aggressive criteria that do not provide this guarantee. Experiments on three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy for all criteria and experimental settings; (ii) reducing execution time under some of the more aggressive criteria, compared to using no pruning; and (iii) selecting better candidate models in limited-resource settings, also compared to using no pruning.
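
As an illustration of the idea only (not the authors' implementation), the Python sketch below shows how a prediction-based pruning criterion could be plugged into a generic candidate search loop. All names (search_with_prediction_pruning, refine, estimate_score, evaluate_probabilistic) and the margin parameter are hypothetical assumptions made for this sketch.

    # Illustrative sketch (not the paper's algorithm): prune candidate theories
    # using a cheap predicted score before paying for the expensive
    # probabilistic evaluation performed by the solver.
    def search_with_prediction_pruning(initial_candidates, refine,
                                       estimate_score, evaluate_probabilistic,
                                       margin=0.0):
        """Greedy search over candidate FOL theories.

        estimate_score(c)         -- cheap prediction of a candidate's quality
        evaluate_probabilistic(c) -- costly evaluation via the probabilistic
                                     logic solver
        refine(c)                 -- returns the candidate's refinements
        margin                    -- larger values prune more aggressively
        """
        best, best_score = None, float("-inf")
        frontier = list(initial_candidates)
        while frontier:
            candidate = frontier.pop()
            # Prediction pruning: skip (and do not refine) candidates whose
            # predicted score does not beat the current best by more than
            # the margin.
            if estimate_score(candidate) <= best_score + margin:
                continue
            score = evaluate_probabilistic(candidate)  # expensive solver call
            if score > best_score:
                best, best_score = candidate, score
            frontier.extend(refine(candidate))  # expand surviving candidates
        return best, best_score

In this sketch, pruning a candidate also discards its refinements, which is where the savings in solver calls would come from; margin = 0.0 loosely mimics a conservative criterion, while larger values mimic more aggressive pruning that trades the optimality guarantee for speed. A truly safe criterion, as described in the paper, would additionally require the predicted score to never underestimate the true score.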

Subject Classification

ACM Subject Classification
  • Computing methodologies → Probabilistic reasoning
Keywords
  • Relational Machine Learning
  • Probabilistic Inductive Logic Programming
  • Search Space Pruning
  • Model Quality
  • Experiments
