13 Search Results for "Kersting, Kristian"


Document
Recent Advancements in Tractable Probabilistic Inference (Dagstuhl Seminar 22161)

Authors: Priyank Jaini, Kristian Kersting, Antonio Vergari, and Max Welling

Published in: Dagstuhl Reports, Volume 12, Issue 4 (2022)


Abstract
In several real-world scenarios, decision making involves advanced reasoning under uncertainty, i.e., the ability to answer probabilistic queries. Typically, these answers must be computed within a limited amount of time. Moreover, in many domains, such as healthcare and economic decision making, it is crucial that the results of these queries are reliable, i.e., either exact or accompanied by approximation guarantees. In all these scenarios, tractable probabilistic inference and learning are becoming increasingly important. Research on representations and learning algorithms for tractable inference spans very different fields, each contributing its own perspective. These include automated reasoning, probabilistic modeling, statistical and Bayesian inference, and deep learning. Among the many avenues recently emerging in these fields are: tractable neural density estimators such as autoregressive models and normalizing flows; deep tractable probabilistic circuits such as sum-product networks and sentential decision diagrams; and approximate inference routines with guarantees on the quality of the approximation. Each of these model classes occupies a particular spot in the continuum between tractability and expressiveness. That is, different model classes may offer appealing advantages in efficiency or representational capability while trading off others of these aspects. So far, clear connections and a deeper understanding of the key differences among them have been hindered by the different languages and perspectives adopted by the different "souls" that comprise the tractable probabilistic modeling community. This Dagstuhl Seminar brought together experts from these sub-communities and provided the perfect venue to exchange perspectives, discuss the recent advancements in depth, and build strong bridges that can greatly propel interdisciplinary research.
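
As an editorial illustration of why such circuits are tractable (this sketch is not from the report; the model and all names are hypothetical), the following Python snippet evaluates a tiny sum-product network. Any marginal query is answered in one bottom-up pass, linear in the size of the circuit, simply by letting marginalized leaves return 1.

# Minimal sum-product network (SPN) sketch; illustrative only, not from the report.
# A valid SPN answers any marginal query in a single bottom-up pass, i.e. in time
# linear in the number of edges; this is the "tractable inference" the seminar studies.

class Leaf:
    """Bernoulli leaf over one variable; returns P(X=value), or 1 if marginalized."""
    def __init__(self, var, p):
        self.var, self.p = var, p
    def value(self, evidence):                     # evidence: dict var -> 0/1
        if self.var not in evidence:               # variable marginalized out
            return 1.0
        return self.p if evidence[self.var] == 1 else 1.0 - self.p

class Sum:
    """Mixture node: weighted sum of children (weights sum to one)."""
    def __init__(self, children, weights):
        self.children, self.weights = children, weights
    def value(self, evidence):
        return sum(w * c.value(evidence) for c, w in zip(self.children, self.weights))

class Product:
    """Factorization node over children with disjoint variable scopes."""
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.value(evidence)
        return out

# P(A, B) = 0.6 * P1(A) P1(B) + 0.4 * P2(A) P2(B)
spn = Sum([Product([Leaf("A", 0.9), Leaf("B", 0.2)]),
           Product([Leaf("A", 0.1), Leaf("B", 0.8)])],
          [0.6, 0.4])

print(spn.value({"A": 1, "B": 1}))   # joint P(A=1, B=1) = 0.14
print(spn.value({"A": 1}))           # marginal P(A=1) = 0.58, same single pass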

Cite as

Priyank Jaini, Kristian Kersting, Antonio Vergari, and Max Welling. Recent Advancements in Tractable Probabilistic Inference (Dagstuhl Seminar 22161). In Dagstuhl Reports, Volume 12, Issue 4, pp. 13-25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{jaini_et_al:DagRep.12.4.13,
  author =	{Jaini, Priyank and Kersting, Kristian and Vergari, Antonio and Welling, Max},
  title =	{{Recent Advancements in Tractable Probabilistic Inference (Dagstuhl Seminar 22161)}},
  pages =	{13--25},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2022},
  volume =	{12},
  number =	{4},
  editor =	{Jaini, Priyank and Kersting, Kristian and Vergari, Antonio and Welling, Max},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.12.4.13},
  URN =		{urn:nbn:de:0030-drops-172785},
  doi =		{10.4230/DagRep.12.4.13},
  annote =	{Keywords: approximate inference with guarantees, deep generative models, probabilistic circuits, Tractable inference}
}
Document
SE4ML - Software Engineering for AI-ML-based Systems (Dagstuhl Seminar 20091)

Authors: Kristian Kersting, Miryung Kim, Guy Van den Broeck, and Thomas Zimmermann

Published in: Dagstuhl Reports, Volume 10, Issue 2 (2020)


Abstract
Multiple research disciplines, from the cognitive sciences to biology, finance, physics, and the social sciences, as well as many companies, believe that data-driven and intelligent solutions are necessary. Unfortunately, current artificial intelligence (AI) and machine learning (ML) technologies are not sufficiently democratized: building complex AI and ML systems requires deep expertise in computer science and extensive programming skills to work with various machine reasoning and learning techniques at a rather low level of abstraction. It also requires extensive trial-and-error exploration for model selection, data cleaning, feature selection, and parameter tuning. Moreover, there is a lack of theoretical understanding that could be used to abstract away these subtleties. Conventional programming languages and software engineering paradigms have also not been designed to address the challenges faced by AI and ML practitioners. In 2016, companies invested $26–39 billion in AI, and McKinsey predicts that investments will continue to grow over the next few years. Any AI/ML-based system will need to be built, tested, and maintained, yet there is a lack of established engineering practices in industry for such systems because they are fundamentally different from traditional software systems. This Dagstuhl Seminar brought two rather disjoint communities, software engineering and programming languages (PL/SE) and artificial intelligence and machine learning (AI-ML), together to discuss open problems on how to improve the productivity of data scientists, software engineers, and AI-ML practitioners in industry.

Cite as

Kristian Kersting, Miryung Kim, Guy Van den Broeck, and Thomas Zimmermann. SE4ML - Software Engineering for AI-ML-based Systems (Dagstuhl Seminar 20091). In Dagstuhl Reports, Volume 10, Issue 2, pp. 76-87, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Article{kersting_et_al:DagRep.10.2.76,
  author =	{Kersting, Kristian and Kim, Miryung and Van den Broeck, Guy and Zimmermann, Thomas},
  title =	{{SE4ML - Software Engineering for AI-ML-based Systems (Dagstuhl Seminar 20091)}},
  pages =	{76--87},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2020},
  volume =	{10},
  number =	{2},
  editor =	{Kersting, Kristian and Kim, Miryung and Van den Broeck, Guy and Zimmermann, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.10.2.76},
  URN =		{urn:nbn:de:0030-drops-130603},
  doi =		{10.4230/DagRep.10.2.76},
  annote =	{Keywords: correctness / explainability / traceability / fairness for ml, data scientist productivity, debugging/ testing / verification for ml systems}
}
Document
Logic and Learning (Dagstuhl Seminar 19361)

Authors: Michael Benedikt, Kristian Kersting, Phokion G. Kolaitis, and Daniel Neider

Published in: Dagstuhl Reports, Volume 9, Issue 9 (2020)


Abstract
The goal of building truly intelligent systems has forever been a central problem in computer science. While logic-based approaches of yore have had their successes and failures, the era of machine learning, specifically deep learning, is also coming upon significant challenges. There is a growing consensus that the inductive reasoning and complex, high-dimensional pattern recognition capabilities of deep learning models need to be combined with the symbolic (even programmatic), deductive capabilities traditionally developed in the logic and automated reasoning communities in order to take the next step towards building intelligent systems, including making progress at the frontier of hard problems such as explainable AI. However, these communities tend to be quite separate and interact only minimally, often at odds with each other over the "correct approach" to AI. This report documents the efforts of Dagstuhl Seminar 19361 on "Logic and Learning" to bring these communities together in order to: (i) bridge the research efforts between them and foster an exchange of ideas, so as to create unified formalisms and approaches that bear the advantages of both research methodologies; (ii) review and analyse the progress made across both communities; (iii) understand the subtleties and difficulties involved in solving hard problems using both perspectives; and (iv) make attempts towards a consensus on what the hard problems are and what the elements of good solutions to these problems would be. The three focal points of the seminar were the strands of "Logic for Machine Learning", "Machine Learning for Logic", and "Logic vs. Machine Learning". The seminar format consisted of long and short talks, as well as breakout sessions. We summarise the motivations and proceedings of the seminar, and report on the abstracts of the talks and the results of the breakout sessions.

Cite as

Michael Benedikt, Kristian Kersting, Phokion G. Kolaitis, and Daniel Neider. Logic and Learning (Dagstuhl Seminar 19361). In Dagstuhl Reports, Volume 9, Issue 9, pp. 1-22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Article{benedikt_et_al:DagRep.9.9.1,
  author =	{Benedikt, Michael and Kersting, Kristian and Kolaitis, Phokion G. and Neider, Daniel},
  title =	{{Logic and Learning (Dagstuhl Seminar 19361)}},
  pages =	{1--22},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2020},
  volume =	{9},
  number =	{9},
  editor =	{Benedikt, Michael and Kersting, Kristian and Kolaitis, Phokion G. and Neider, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.9.9.1},
  URN =		{urn:nbn:de:0030-drops-118425},
  doi =		{10.4230/DagRep.9.9.1},
  annote =	{Keywords: Artificial Intelligence, Automated reasoning, Databases, Deep Learning, Inductive Logic Programming, Logic, Logic and Learning, Logic for Machine Learning, Logic vs. Machine Learning, Machine Learning, Machine Learning for Logic, Neurosymbolic methods}
}
Document
07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis

Authors: Luc De Raedt, Thomas Dietterich, Lise Getoor, Kristian Kersting, and Stephen H. Muggleton

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
From April 14–20, 2007, the Dagstuhl Seminar 07161 "Probabilistic, Logical and Relational Learning - A Further Synthesis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

Cite as

Luc De Raedt, Thomas Dietterich, Lise Getoor, Kristian Kersting, and Stephen H. Muggleton. 07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.07161.1,
  author =	{De Raedt, Luc and Dietterich, Thomas and Getoor, Lise and Kersting, Kristian and Muggleton, Stephen H.},
  title =	{{07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.1},
  URN =		{urn:nbn:de:0030-drops-13885},
  doi =		{10.4230/DagSemProc.07161.1},
  annote =	{Keywords: Artificial Intelligence, Uncertainty in AI, Probabilistic Reasoning, Knowledge Representation, Logic Programming, Relational Learning, Inductive Logic Programming, Graphical Models, Statistical Relational Learning, First-Order Logical and Relational Probabilistic Languages}
}
Document
A general framework for unsupervised processing of structured data

Authors: Barbara Hammer, Alessio Micheli, and Alessandro Sperduti

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
We propose a general framework for unsupervised recurrent and recursive networks. This proposal covers various popular approaches such as the standard self-organizing map (SOM), temporal Kohonen maps, recursive SOM, and SOM for structured data. We define Hebbian learning within this general framework. We show how approaches based on an energy function, like neural gas, can be transferred to this abstract framework, so that proposals for new learning algorithms emerge.
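
For readers unfamiliar with the base case this framework generalizes, here is a minimal Python sketch (ours, not the authors'; grid size, learning rate, and neighborhood width are illustrative guesses) of one standard SOM update, the Hebbian-style step that the recursive variants extend with temporal context:

import numpy as np

# Minimal sketch of a standard self-organizing map (SOM) update; sizes and
# rates below are illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(5) for j in range(5)])  # 5x5 map positions
weights = rng.normal(size=(25, 3))                              # codebook in R^3

def som_step(x, weights, lr=0.1, sigma=1.0):
    """One Hebbian-style update: move the winner and its grid neighbors toward x."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))     # best-matching unit
    # Gaussian neighborhood kernel on the 2-D grid, centered at the winner.
    d2 = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)

for x in rng.normal(size=(200, 3)):    # stream of training inputs
    weights = som_step(x, weights)

print(np.round(weights[:3], 2))        # part of the codebook after training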

Cite as

Barbara Hammer, Alessio Micheli, and Alessandro Sperduti. A general framework for unsupervised processing of structured data. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{hammer_et_al:DagSemProc.07161.2,
  author =	{Hammer, Barbara and Micheli, Alessio and Sperduti, Alessandro},
  title =	{{A general framework for unsupervised processing of structured data}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.2},
  URN =		{urn:nbn:de:0030-drops-13837},
  doi =		{10.4230/DagSemProc.07161.2},
  annote =	{Keywords: Relational clustering, median clustering, recursive SOM models, kernel SOM}
}
Document
Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies

Authors: Sriraam Natarajan, Prasad Tadepalli, and Alan Fern

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Statistical relational models have been successfully used to model static probabilistic relationships between the entities of a domain. In this talk, we illustrate their use in a dynamic decision-theoretic setting where the task is to assist a user by inferring his intentional structure and taking appropriate assistive actions. We show that statistical relational models can be used to succinctly express the system's prior knowledge about the user's goal-subgoal structure and to tune it with experience. As the system becomes better able to predict the user's goals, it improves the effectiveness of its assistance. We show through experiments that both the hierarchical structure of the goals and the parameter sharing facilitated by relational models significantly improve the learning speed.
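
A minimal sketch of the core inference step such an assistant needs, assuming a flat goal set and hand-made numbers (the authors' system uses relational, hierarchical models; every goal, action, and probability below is hypothetical):

# Toy Bayesian goal inference: update the posterior over the user's goal after
# each observed action. All names and numbers are invented for illustration.

priors = {"write_report": 0.5, "book_travel": 0.3, "file_expense": 0.2}
# P(action | goal), e.g. estimated from past interactions; values are made up.
likelihood = {
    ("open_editor", "write_report"): 0.7,
    ("open_editor", "book_travel"): 0.1,
    ("open_editor", "file_expense"): 0.2,
}

def update(posterior, action):
    """One Bayes step: posterior(goal) proportional to P(action | goal) * prior(goal)."""
    unnorm = {g: likelihood.get((action, g), 0.01) * p for g, p in posterior.items()}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

print(update(priors, "open_editor"))   # posterior sharpens toward write_report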

Cite as

Sriraam Natarajan, Prasad Tadepalli, and Alan Fern. Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{natarajan_et_al:DagSemProc.07161.3,
  author =	{Natarajan, Sriraam and Tadepalli, Prasad and Fern, Alan},
  title =	{{Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.3},
  URN =		{urn:nbn:de:0030-drops-13856},
  doi =		{10.4230/DagSemProc.07161.3},
  annote =	{Keywords: Statistical Relational Learning, Intelligent Assistants}
}
Document
Learning Probabilistic Relational Dynamics for Multiple Tasks

Authors: Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This extended abstract addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains.
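
The coordinate-ascent pattern the abstract describes can be illustrated with a scalar stand-in for rule sets (a loose sketch under strong simplifying assumptions: real rule sets are discrete structures, and all numbers below are made up):

import numpy as np

# Toy coordinate ascent for hierarchical Bayesian multi-task learning:
# alternate between (a) fitting task-specific parameters under a shared prior
# and (b) re-fitting the shared "prototype" to those parameters.

rng = np.random.default_rng(1)
task_data = [rng.normal(loc=mu, size=20) for mu in (0.9, 1.1, 1.0)]  # 3 related tasks
tau2, sigma2 = 0.5, 1.0          # prior and observation variances (assumed known)
prototype = 0.0                  # shared prototype parameter

for _ in range(10):
    # (a) Each task's MAP estimate shrinks toward the current prototype.
    thetas = [(x.sum() / sigma2 + prototype / tau2) / (len(x) / sigma2 + 1 / tau2)
              for x in task_data]
    # (b) Re-estimate the prototype as the mean of the task parameters.
    prototype = float(np.mean(thetas))

print(prototype, thetas)   # prototype near 1.0; task estimates pooled toward it

Transfer shows up exactly as in the abstract: tasks with little data borrow strength from the prototype learned across all tasks.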

Cite as

Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling. Learning Probabilistic Relational Dynamics for Multiple Tasks. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deshpande_et_al:DagSemProc.07161.4,
  author =	{Deshpande, Ashwin and Milch, Brian and Zettlemoyer, Luke S. and Kaelbling, Leslie Pack},
  title =	{{Learning Probabilistic Relational Dynamics for Multiple Tasks}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.4},
  URN =		{urn:nbn:de:0030-drops-13846},
  doi =		{10.4230/DagSemProc.07161.4},
  annote =	{Keywords: Hierarchical Bayesian models, transfer learning, multi-task learning, probabilistic planning rules}
}
Document
Logical Particle Filtering

Authors: Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
In this paper, we consider the problem of filtering in relational hidden Markov models. We present a compact representation for such models and an associated logical particle filtering algorithm. Each particle contains a logical formula that describes a set of states. The algorithm updates the formulae as new observations are received. Since a single particle tracks many states, this filter can be more accurate than a traditional particle filter in high dimensional state spaces, as we demonstrate in experiments.
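
A toy sketch of the central idea, with the "formula" represented extensionally as a set of states (the paper's contribution is representing such sets compactly as logical formulae and sampling among stochastic transition outcomes; this deterministic toy does neither):

# One "particle" tracking a whole *set* of states on a toy chain with states 0..9.

def successors(s):                       # nondeterministic dynamics: left or right
    return {max(s - 1, 0), min(s + 1, 9)}

belief = frozenset(range(10))            # start fully uncertain
for obs in [0, 1, 1, 0]:                 # observe the state's parity each step
    image = set().union(*(successors(s) for s in belief))   # predict
    belief = frozenset(s for s in image if s % 2 == obs)    # condition on obs
    print(sorted(belief))                # the set shrinks as evidence arrives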

Cite as

Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling. Logical Particle Filtering. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{zettlemoyer_et_al:DagSemProc.07161.5,
  author =	{Zettlemoyer, Luke S. and Pasula, Hanna M. and Pack Kaelbling, Leslie},
  title =	{{Logical Particle Filtering}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.5},
  URN =		{urn:nbn:de:0030-drops-13792},
  doi =		{10.4230/DagSemProc.07161.5},
  annote =	{Keywords: Particle filter, logical hidden Markov model}
}
Document
Markov Logic in Infinite Domains

Authors: Pedro Domingos and Parag Singla

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Markov logic combines logic and probability by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. Unfortunately, in its original formulation it does not have the full power of first-order logic, because it applies only to finite domains. Recently, we have extended Markov logic to infinite domains, by casting it in the framework of Gibbs measures. In this talk I will summarize our main results to date, including sufficient conditions for the existence and uniqueness of a Gibbs measure consistent with an infinite MLN, and properties of the set of consistent measures in the non-unique case. (Many important phenomena, like phase transitions, are modeled by non-unique MLNs.) Under the conditions for existence, we have extended to infinite domains the result in Richardson and Domingos (2006) that first-order logic is the limiting case of Markov logic when all weights tend to infinity. I will also discuss some fundamental limitations of Herbrand interpretations (and representations based on them) for probabilistic modeling of infinite domains, and how to get around them. Finally, I will discuss some of the surprising insights for learning and inference in large finite domains that result from considering the infinite limit.
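
For contrast with the infinite case discussed in the talk, here is a brute-force finite-domain sketch of the basic Markov logic semantics (illustrative only; the rule, weight, and domain are invented): the probability of a world is proportional to exp(weight × number of true groundings).

import itertools, math

# Toy finite-domain Markov logic network with one weighted rule,
#   w: Smokes(x) => Cancer(x), over a two-person domain.

people = ["anna", "bob"]
w = 1.5

def n_true_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) that hold in the world."""
    smokes, cancer = world
    return sum((not smokes[p]) or cancer[p] for p in people)

# Enumerate all 16 worlds: each person smokes or not, has cancer or not.
worlds = []
for bits in itertools.product([False, True], repeat=4):
    smokes = dict(zip(people, bits[:2]))
    cancer = dict(zip(people, bits[2:]))
    worlds.append((smokes, cancer))

scores = [math.exp(w * n_true_groundings(wld)) for wld in worlds]
Z = sum(scores)                                    # partition function
probs = [v / Z for v in scores]
print(max(probs), min(probs))   # worlds satisfying more groundings are likelier

The talk's subject is precisely what happens when the domain, and hence this enumeration, becomes infinite and Z no longer exists as a finite sum.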

Cite as

Pedro Domingos and Parag Singla. Markov Logic in Infinite Domains. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{domingos_et_al:DagSemProc.07161.6,
  author =	{Domingos, Pedro and Singla, Parag},
  title =	{{Markov Logic in Infinite Domains}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.6},
  URN =		{urn:nbn:de:0030-drops-13811},
  doi =		{10.4230/DagSemProc.07161.6},
  annote =	{Keywords: Markov logic networks, Gibbs measures, first-order logic, infinite probabilistic models, Markov networks}
}
Document
Model equivalence of PRISM programs

Authors: James Cussens

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
The problem of deciding the probability model equivalence of two PRISM programs is addressed. In the finite case this problem can be solved (albeit slowly) using techniques from algebraic statistics, specifically the computation of elimination ideals and Gröbner bases. A very brief introduction to algebraic statistics is given. Consideration is given to cases where shortcuts to proving/disproving model equivalence are available.
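
A small illustration of the machinery involved, using SymPy (our example, not the paper's): implicitize a toy parametric model by computing a lex Gröbner basis and reading off the elimination ideal. Comparing such ideals for two programs is the kind of equivalence test the abstract refers to.

from sympy import symbols, groebner

# Implicitization of a toy model p = Binomial(2, t) over outcomes {0, 1, 2}:
# eliminate the parameter t to find the polynomial constraints on (p0, p1, p2).

t, p0, p1, p2 = symbols("t p0 p1 p2")
polys = [p0 - t**2, p1 - 2*t*(1 - t), p2 - (1 - t)**2]

# With lex order and t first, the basis elements free of t generate the
# elimination ideal, i.e. the model's implicit invariants.
G = groebner(polys, t, p0, p1, p2, order="lex")
invariants = [g for g in G.exprs if t not in g.free_symbols]
print(invariants)   # includes p0 + p1 + p2 - 1 and a quadratic equivalent
                    # to the Hardy-Weinberg-style relation p1**2 - 4*p0*p2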

Cite as

James Cussens. Model equivalence of PRISM programs. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{cussens:DagSemProc.07161.7,
  author =	{Cussens, James},
  title =	{{Model equivalence of PRISM programs}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.7},
  URN =		{urn:nbn:de:0030-drops-13808},
  doi =		{10.4230/DagSemProc.07161.7},
  annote =	{Keywords: PRISM programs, model equivalence, model inclusion, algebraic statistics, algebraic geometry, ideals, varieties, Gr{\"o}bner bases, polynomials}
}
Document
On classification, ranking, and probability estimation

Authors: Peter Flach and Edson Matsubara

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Given a binary classification task, a ranker is an algorithm that can sort a set of instances from highest to lowest expectation that the instance is positive. In contrast to a classifier, a ranker does not output class predictions, although it can be turned into a classifier with the help of an additional procedure to split the ranked list into two. A straightforward way to compute rankings is to train a scoring classifier to assign numerical scores to instances, for example the predicted odds that an instance is positive. However, rankings can be computed without scores, as we demonstrate in this paper. We propose a lexicographic ranker, LexRank, whose rankings are derived not from scores, but from a simple ranking of attribute values obtained from the training data. Although various metrics can be used, we show that by using the odds ratio to rank the attribute values we obtain a ranker that is conceptually close to the naive Bayes classifier, in the sense that for every instance of LexRank there exists an instance of naive Bayes that achieves the same ranking. However, the reverse is not true, which means that LexRank is more biased than naive Bayes. We systematically develop the relationships and differences between classification, ranking, and probability estimation, which leads to a novel connection between the Brier score and ROC curves. Combining LexRank with isotonic regression, which derives probability estimates from the ROC convex hull, results in the lexicographic probability estimator LexProb.
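
A rough sketch of LexRank as we read the abstract (our reconstruction, with made-up data; the attribute ordering is assumed given, and the smoothing is our addition): rank each attribute's values by their odds of being positive in the training data, then sort instances lexicographically by those value ranks, with no numerical scores involved.

import numpy as np

# Lexicographic ranking sketch over two binary attributes; data is invented.

X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 1]])
y = np.array([1, 1, 1, 0, 1, 0])

def value_order(col, y, eps=1e-9):
    """Rank each value of one attribute by its smoothed odds of the positive class."""
    odds = {v: (y[col == v].sum() + eps) / ((y[col == v] == 0).sum() + eps)
            for v in np.unique(col)}
    return {v: r for r, v in enumerate(sorted(odds, key=odds.get, reverse=True))}

orders = [value_order(X[:, j], y) for j in range(X.shape[1])]

def lex_key(x):
    """Tuple of value ranks, compared lexicographically (rank 0 = most positive)."""
    return tuple(orders[j][x[j]] for j in range(len(x)))

ranked = sorted(range(len(X)), key=lambda i: lex_key(X[i]))
print(ranked)   # instance indices from most to least promising, without scores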

Cite as

Peter Flach and Edson Matsubara. On classification, ranking, and probability estimation. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{flach_et_al:DagSemProc.07161.8,
  author =	{Flach, Peter and Matsubara, Edson},
  title =	{{On classification, ranking, and probability estimation}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.8},
  URN =		{urn:nbn:de:0030-drops-13828},
  doi =		{10.4230/DagSemProc.07161.8},
  annote =	{Keywords: Ranking, probability estimation, ROC analysis, calibration}
}
Document
Structural Sampling for Statistical Software Testing

Authors: Nicolas Baskiotis and Michele Sebag

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Structural Statistical Software Testing exploits the control flow graph of the program being tested to construct test cases. While test cases can easily be extracted from feasible paths in the control flow graph, that is, paths which are actually exercised for some values of the program input, the feasible path region is a tiny fraction of the graph paths (less than $10^{-5}$ for medium-size programs). The S4T algorithm presented in this paper aims to address this limitation; as an Active Relational Learning algorithm, it uses the few feasible paths initially available to sample new feasible paths. The difficulty comes from the non-Markovian nature of the feasible-path concept, due to the long-range dependencies between the nodes in the control flow graph. Experimental validation on real-world and artificial problems demonstrates significant improvements compared to the state of the art.
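
To fix the setting (this is not the S4T algorithm; the graph, the oracle, and the long-range dependency are invented), here is a sketch of naive path sampling on a control flow graph with a feasibility oracle, the baseline whose poor hit rate motivates the paper:

import random

# Naive structural sampling: draw random walks through a toy control flow
# graph and keep those a feasibility oracle accepts.

random.seed(0)
CFG = {"entry": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["exit", "a"], "exit": []}

def sample_path(max_len=12):
    node, path = "entry", ["entry"]
    while node != "exit" and len(path) < max_len:
        node = random.choice(CFG[node])
        path.append(node)
    return path if node == "exit" else None

def feasible(path):
    # Stand-in oracle: real feasibility means some program input exercises the
    # path; we fake a long-range (non-Markovian) dependency: "a" at most once.
    return path is not None and path.count("a") < 2

samples = [p for p in (sample_path() for _ in range(1000)) if feasible(p)]
print(len(samples), samples[0])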

Cite as

Nicolas Baskiotis and Michele Sebag. Structural Sampling for Statistical Software Testing. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{baskiotis_et_al:DagSemProc.07161.9,
  author =	{Baskiotis, Nicolas and Sebag, Michele},
  title =	{{Structural Sampling for Statistical Software Testing}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.9},
  URN =		{urn:nbn:de:0030-drops-13875},
  doi =		{10.4230/DagSemProc.07161.9},
  annote =	{Keywords: Active Relational Learning, Software Testing, Autonomic Computing, Parikh Maps}
}
Document
Variational Bayes via Propositionalization

Authors: Taisuke Sato, Yoshitaka Kameya, and Kenichi Kurihara

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
We propose a unified approach to VB (variational Bayes) in symbolic-statistical modeling via propositionalization. By propositionalization we mean, broadly, expressing and computing probabilistic models such as BNs (Bayesian networks) and PCFGs (probabilistic context-free grammars) in terms of propositional logic that treats propositional variables as binary random variables. Our proposal is motivated by three observations. The first is that PPC (propositionalized probability computation), i.e. probability computation formalized in a propositional setting, has turned out to be general and efficient when variable values are sparsely interdependent. Examples include (discrete) BNs, PCFGs, and more generally PRISM, a Turing-complete logic programming language with EM learning ability that we have been developing, which computes probabilities using graphically represented AND/OR boolean formulas. The efficiency of PPC is classically demonstrated by the Inside-Outside algorithm in the case of PCFGs, and by recent PPC approaches in the case of BNs, such as the one by Darwiche et al. that exploits zero probabilities and CSI (context-specific independence). Dechter et al. also revealed that PPC is a general computation scheme for BNs through their formulation of AND/OR search spaces. Second, while VB has been around for some time as a practically effective approach to Bayesian modeling, its use is still somewhat restricted to simple models such as BNs and HMMs (hidden Markov models), though its usefulness is established through a variety of applications from model selection to prediction. On the other hand, it has already been proved that VB can be extended to PCFGs and is efficiently implementable using dynamic programming. Note that PCFGs are just one class of PPC, and much more general PPC is realized by PRISM. Accordingly, if VB is extended to PRISM's PPC, we will obtain VB for general probabilistic models, far wider than BNs and PCFGs. The last observation is that once VB becomes available in PRISM, it saves us a lot of time and energy. First, we do not have to derive a new VB algorithm from scratch for each model and implement it. All we have to do is write a probabilistic model at the predicate level; the rest of the work will be carried out automatically in a unified manner by the PRISM system, as happens in the case of EM learning. Deriving and implementing a VB algorithm is a tedious, error-prone process, and ensuring its correctness would be difficult beyond PCFGs without formal semantics. PRISM augmented with VB will completely eliminate such needs and make it easy to explore and test new Bayesian models by helping the user cope with data sparseness and avoid over-fitting.
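
The "derive once, run for any model" argument can be made concrete with the standard VB update pair on a toy model (a sketch of generic VB machinery, assuming NumPy/SciPy; the model and all numbers are ours, and PRISM itself is not involved): expected sufficient statistics under the variational posterior, followed by conjugate updates.

import numpy as np
from scipy.special import digamma

# Variational Bayes for a mixture of two coins; each data row is the number of
# heads in n = 10 flips of one (unknown) coin. Everything here is illustrative.

rng = np.random.default_rng(0)
data = np.concatenate([rng.binomial(10, 0.2, 30), rng.binomial(10, 0.8, 30)])
n = 10

# Variational posteriors: Beta(a, b) per coin, Dirichlet(c) over mixture weights.
a, b = np.array([1.0, 2.0]), np.array([2.0, 1.0])   # asymmetric init breaks symmetry
c = np.array([1.0, 1.0])

for _ in range(50):
    # E-step analogue: responsibilities from *expected* log parameters.
    Elog_p = digamma(a) - digamma(a + b)             # E[log theta_k]
    Elog_q = digamma(b) - digamma(a + b)             # E[log (1 - theta_k)]
    Elog_w = digamma(c) - digamma(c.sum())
    logits = Elog_w + np.outer(data, Elog_p) + np.outer(n - data, Elog_q)
    r = np.exp(logits - logits.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step analogue: conjugate posterior updates with expected counts.
    a = 1.0 + r.T @ data
    b = 1.0 + r.T @ (n - data)
    c = 1.0 + r.sum(axis=0)

print(a / (a + b))   # posterior mean biases, near the true 0.2 and 0.8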

Cite as

Taisuke Sato, Yoshitaka Kameya, and Kenichi Kurihara. Variational Bayes via Propositionalization. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{sato_et_al:DagSemProc.07161.10,
  author =	{Sato, Taisuke and Kameya, Yoshitaka and Kurihara, Kenichi},
  title =	{{Variational Bayes via Propositionalization}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.10},
  URN =		{urn:nbn:de:0030-drops-13860},
  doi =		{10.4230/DagSemProc.07161.10},
  annote =	{Keywords: Variational Bayes, propositionalized probability computation, PRISM}
}
  • Refine by Author
  • 4 Kersting, Kristian
  • 2 Zettlemoyer, Luke S.
  • 1 Baskiotis, Nicolas
  • 1 Benedikt, Michael
  • 1 Cussens, James

  • Refine by Classification
  • 2 Computing methodologies → Artificial intelligence
  • 2 Computing methodologies → Machine learning
  • 1 Software and its engineering

  • Refine by Keyword
  • 2 Artificial Intelligence
  • 2 Inductive Logic Programming
  • 2 Statistical Relational Learning
  • 1 Active Relational Learning
  • 1 Automated reasoning

  • Refine by Type
  • 13 document

  • Refine by Publication Year
  • 10 2008
  • 2 2020
  • 1 2022
