22 Search Results for "Getoor, Lise"


Document
Invited Talk
The Power of Relational Learning (Invited Talk)

Authors: Lise Getoor

Published in: LIPIcs, Volume 127, 22nd International Conference on Database Theory (ICDT 2019)


Abstract
We live in a richly interconnected world and, not surprisingly, we generate richly interconnected data. From smart cities to social media to financial networks to biological networks, data is relational. While database theory is built on strong relational foundations, the same is not true for machine learning. The majority of machine learning methods flatten data into a single table before performing any processing. Further, database theory is also built on a bedrock of declarative representations. The same is not true for machine learning, in particular deep learning, which results in black-box, uninterpretable and unexplainable models. In this talk, I will introduce the field of statistical relational learning, an alternative machine learning approach based on declarative relational representations paired with probabilistic models. I’ll describe our work on probabilistic soft logic, a probabilistic programming language that is ideally suited to richly connected, noisy data. Our recent results show that by building on state-of-the-art optimization methods in a distributed implementation, we can solve very large relational learning problems orders of magnitude faster than existing approaches.
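To give a flavor of the formalism, the sketch below (an illustration, not code from the talk) shows how a single weighted PSL-style rule is relaxed into a hinge-loss potential via Łukasiewicz logic; this relaxation is what makes inference in probabilistic soft logic a convex optimization problem. The rule, weight, and soft truth values are invented for the example.

# Hedged sketch: one weighted PSL-style rule as a hinge-loss potential
# under the Lukasiewicz relaxation. Atoms and weight are made up.

def lukasiewicz_and(a, b):
    # Soft conjunction of truth values in [0, 1]: max(0, a + b - 1).
    return max(0.0, a + b - 1.0)

def distance_to_satisfaction(body, head):
    # For a rule body -> head, the hinge max(0, body - head).
    return max(0.0, body - head)

# Rule (weight 0.8): Friends(A,B) & Votes(A,P) -> Votes(B,P)
friends_ab, votes_ap, votes_bp = 0.9, 0.7, 0.4   # soft truth values
body = lukasiewicz_and(friends_ab, votes_ap)     # 0.6
penalty = 0.8 * distance_to_satisfaction(body, votes_bp)
print(penalty)  # 0.8 * max(0, 0.6 - 0.4) = 0.16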

Cite as

Lise Getoor. The Power of Relational Learning (Invited Talk). In 22nd International Conference on Database Theory (ICDT 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 127, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{getoor:LIPIcs.ICDT.2019.2,
  author =	{Getoor, Lise},
  title =	{{The Power of Relational Learning}},
  booktitle =	{22nd International Conference on Database Theory (ICDT 2019)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-101-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{127},
  editor =	{Barcelo, Pablo and Calautti, Marco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2019.2},
  URN =		{urn:nbn:de:0030-drops-103048},
  doi =		{10.4230/LIPIcs.ICDT.2019.2},
  annote =	{Keywords: Machine learning, Probabilistic soft logic, Relational model}
}
Document
07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis

Authors: Luc De Raedt, Thomas Dietterich, Lise Getoor, Kristian Kersting, and Stephen H. Muggleton

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
From April 14 to 20, 2007, the Dagstuhl Seminar 07161 "Probabilistic, Logical and Relational Learning - A Further Synthesis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Luc De Raedt, Thomas Dietterich, Lise Getoor, Kristian Kersting, and Stephen H. Muggleton. 07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.07161.1,
  author =	{De Raedt, Luc and Dietterich, Thomas and Getoor, Lise and Kersting, Kristian and Muggleton, Stephen H.},
  title =	{{07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.1},
  URN =		{urn:nbn:de:0030-drops-13885},
  doi =		{10.4230/DagSemProc.07161.1},
  annote =	{Keywords: Artificial Intelligence, Uncertainty in AI, Probabilistic Reasoning, Knowledge Representation, Logic Programming, Relational Learning, Inductive Logic Programming, Graphical Models, Statistical Relational Learning, First-Order Logical and Relational Probabilistic Languages}
}
Document
A general framework for unsupervised processing of structured data

Authors: Barbara Hammer, Alessio Micheli, and Alessandro Sperduti

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
We propose a general framework for unsupervised recurrent and recursive networks. This proposal covers various popular approaches like standard self-organizing maps (SOM), temporal Kohonen maps, recursive SOM, and SOM for structured data. We define Hebbian learning within this general framework. We show how approaches based on an energy function, like neural gas, can be transferred to this abstract framework so that proposals for new learning algorithms emerge.
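For readers unfamiliar with the family of models being unified, the sketch below shows the core of a standard SOM training step (winner selection plus a neighborhood-weighted, Hebbian-style update). It is a generic textbook illustration rather than the authors' framework; the map size, learning rate, and neighborhood width are arbitrary.

import numpy as np

# Generic SOM training step: find the best-matching unit, then pull
# nearby prototypes toward the input.
rng = np.random.default_rng(0)
grid = rng.random((5, 5, 3))          # 5x5 map of 3-dim prototypes
coords = np.array([(i, j) for i in range(5) for j in range(5)])

def som_step(x, grid, lr=0.1, sigma=1.0):
    flat = grid.reshape(-1, grid.shape[-1])                # view of grid
    winner = np.argmin(((flat - x) ** 2).sum(axis=1))      # best-matching unit
    d2 = ((coords - coords[winner]) ** 2).sum(axis=1)      # map distance
    h = np.exp(-d2 / (2 * sigma ** 2))[:, None]            # neighborhood kernel
    flat += lr * h * (x - flat)                            # update prototypes
    return grid

som_step(rng.random(3), grid)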

Cite as

Barbara Hammer, Alessio Micheli, and Alessandro Sperduti. A general framework for unsupervised processing of structured data. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{hammer_et_al:DagSemProc.07161.2,
  author =	{Hammer, Barbara and Micheli, Alessio and Sperduti, Alessandro},
  title =	{{A general framework for unsupervised processing of structured data}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.2},
  URN =		{urn:nbn:de:0030-drops-13837},
  doi =		{10.4230/DagSemProc.07161.2},
  annote =	{Keywords: Relational clustering, median clustering, recursive SOM models, kernel SOM}
}
Document
Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies

Authors: Sriraam Natarajan, Prasad Tadepalli, and Alan Fern

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Statistical relational models have been successfully used to model static probabilistic relationships between the entities of the domain. In this talk, we illustrate their use in a dynamic decision-theoretic setting where the task is to assist a user by inferring his intentional structure and taking appropriate assistive actions. We show that statistical relational models can be used to succinctly express the system's prior knowledge about the user's goal-subgoal structure and tune it with experience. As the system is better able to predict the user's goals, it improves the effectiveness of its assistance. We show through experiments that both the hierarchical structure of the goals and the parameter sharing facilitated by relational models significantly improve the learning speed.

Cite as

Sriraam Natarajan, Prasad Tadepalli, and Alan Fern. Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{natarajan_et_al:DagSemProc.07161.3,
  author =	{Natarajan, Sriraam and Tadepalli, Prasad and Fern, Alan},
  title =	{{Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.3},
  URN =		{urn:nbn:de:0030-drops-13856},
  doi =		{10.4230/DagSemProc.07161.3},
  annote =	{Keywords: Statistical Relational Learning, Intelligent Assistants}
}
Document
Learning Probabilistic Relational Dynamics for Multiple Tasks

Authors: Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This extended abstract addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains.
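The alternating structure of that coordinate ascent can be pictured with the generic skeleton below. It is a schematic, not the paper's algorithm: fit_task_rules and fit_prior are hypothetical placeholders for the task-specific optimization and the prior-refitting step.

# Generic coordinate-ascent skeleton for hierarchical multi-task
# learning: alternate between refitting each task's rule set against
# the shared prior and refitting the prior to the current rule sets.

def coordinate_ascent(tasks, init_prior, fit_task_rules, fit_prior,
                      n_iters=20):
    prior = init_prior
    task_rules = []
    for _ in range(n_iters):
        # Step 1: optimize each task-specific rule set given the prior.
        task_rules = [fit_task_rules(data, prior) for data in tasks]
        # Step 2: optimize the shared prior given all task rule sets.
        prior = fit_prior(task_rules)
    return prior, task_rules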

Cite as

Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling. Learning Probabilistic Relational Dynamics for Multiple Tasks. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deshpande_et_al:DagSemProc.07161.4,
  author =	{Deshpande, Ashwin and Milch, Brian and Zettlemoyer, Luke S. and Kaelbling, Leslie Pack},
  title =	{{Learning Probabilistic Relational Dynamics for Multiple Tasks}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.4},
  URN =		{urn:nbn:de:0030-drops-13846},
  doi =		{10.4230/DagSemProc.07161.4},
  annote =	{Keywords: Hierarchical Bayesian models, transfer learning, multi-task learning, probabilistic planning rules}
}
Document
Logical Particle Filtering

Authors: Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
In this paper, we consider the problem of filtering in relational hidden Markov models. We present a compact representation for such models and an associated logical particle filtering algorithm. Each particle contains a logical formula that describes a set of states. The algorithm updates the formulae as new observations are received. Since a single particle tracks many states, this filter can be more accurate than a traditional particle filter in high dimensional state spaces, as we demonstrate in experiments.
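A schematic of the idea (my reconstruction from the abstract, not the paper's algorithm): each particle carries a set of states, standing in for the models of a logical formula, so that one particle covers many concrete states. transition and obs_likelihood are hypothetical model callbacks.

import random

# Each "logical particle" is a frozenset of states; predict, weight,
# and resample operate on whole state sets rather than single states.

def logical_particle_filter(particles, obs, transition, obs_likelihood,
                            n_particles=100):
    # Predict: propagate every state in each set through the model.
    predicted = [frozenset(s2 for s in p for s2 in transition(s))
                 for p in particles]
    # Weight: score each state set against the new observation.
    weights = [sum(obs_likelihood(obs, s) for s in p) for p in predicted]
    total = sum(weights) or 1.0
    # Resample proportionally to the weights.
    return random.choices(predicted,
                          weights=[w / total for w in weights],
                          k=n_particles)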

Cite as

Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling. Logical Particle Filtering. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{zettlemoyer_et_al:DagSemProc.07161.5,
  author =	{Zettlemoyer, Luke S. and Pasula, Hanna M. and Pack Kaelbling, Leslie},
  title =	{{Logical Particle Filtering}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.5},
  URN =		{urn:nbn:de:0030-drops-13792},
  doi =		{10.4230/DagSemProc.07161.5},
  annote =	{Keywords: Particle filter, logical hidden Markov model}
}
Document
Markov Logic in Infinite Domains

Authors: Pedro Domingos and Parag Singla

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Markov logic combines logic and probability by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. Unfortunately, in its original formulation it does not have the full power of first-order logic, because it applies only to finite domains. Recently, we have extended Markov logic to infinite domains, by casting it in the framework of Gibbs measures. In this talk I will summarize our main results to date, including sufficient conditions for the existence and uniqueness of a Gibbs measure consistent with an infinite MLN, and properties of the set of consistent measures in the non-unique case. (Many important phenomena, like phase transitions, are modeled by non-unique MLNs.) Under the conditions for existence, we have extended to infinite domains the result in Richardson and Domingos (2006) that first-order logic is the limiting case of Markov logic when all weights tend to infinity. I will also discuss some fundamental limitations of Herbrand interpretations (and representations based on them) for probabilistic modeling of infinite domains, and how to get around them. Finally, I will discuss some of the surprising insights for learning and inference in large finite domains that result from considering the infinite limit.
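For reference, the finite-domain distribution being extended is the standard Markov logic network of Richardson and Domingos (2006), where formula F_i has weight w_i and n_i(x) counts its true groundings in world x:

P(X = x) = \frac{1}{Z} \exp\Bigl(\sum_i w_i \, n_i(x)\Bigr),
\qquad
Z = \sum_{x'} \exp\Bigl(\sum_i w_i \, n_i(x')\Bigr)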

Cite as

Pedro Domingos and Parag Singla. Markov Logic in Infinite Domains. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{domingos_et_al:DagSemProc.07161.6,
  author =	{Domingos, Pedro and Singla, Parag},
  title =	{{Markov Logic in Infinite Domains}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.6},
  URN =		{urn:nbn:de:0030-drops-13811},
  doi =		{10.4230/DagSemProc.07161.6},
  annote =	{Keywords: Markov logic networks, Gibbs measures, first-order logic, infinite probabilistic models, Markov networks}
}
Document
Model equivalence of PRISM programs

Authors: James Cussens

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
The problem of deciding the probability model equivalence of two PRISM programs is addressed. In the finite case this problem can be solved (albeit slowly) using techniques from algebraic statistics, specifically the computation of elimination ideals and Gröbner bases. A very brief introduction to algebraic statistics is given. Consideration is given to cases where shortcuts to proving/disproving model equivalence are available.
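To make the elimination-ideal idea concrete, the toy computation below (my illustration, not an example from the paper) uses SymPy to eliminate a parameter t from a two-outcome parametrization and recover the implicit constraint on the output probabilities:

from sympy import symbols, groebner

# Toy elimination: a model outputs probabilities p1 = t, p2 = 1 - t.
# A lex Groebner basis with t ordered first yields the elimination
# ideal: basis elements free of t describe the model's variety.
t, p1, p2 = symbols('t p1 p2')
G = groebner([p1 - t, p2 - (1 - t)], t, p1, p2, order='lex')
print([g for g in G.exprs if t not in g.free_symbols])
# -> [p1 + p2 - 1]: the only constraint is that the probabilities sum to 1.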

Cite as

James Cussens. Model equivalence of PRISM programs. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{cussens:DagSemProc.07161.7,
  author =	{Cussens, James},
  title =	{{Model equivalence of PRISM programs}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.7},
  URN =		{urn:nbn:de:0030-drops-13808},
  doi =		{10.4230/DagSemProc.07161.7},
  annote =	{Keywords: PRISM programs, model equivalence, model inclusion, algebraic statistics, algebraic geometry, ideals, varieties, Gr\"{o}bner bases, polynomials}
}
Document
On classification, ranking, and probability estimation

Authors: Peter Flach and Edson Matsubara

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Given a binary classification task, a ranker is an algorithm that can sort a set of instances from highest to lowest expectation that the instance is positive. In contrast to a classifier, a ranker does not output class predictions – although it can be turned into a classifier with the help of an additional procedure to split the ranked list into two. A straightforward way to compute rankings is to train a scoring classifier to assign numerical scores to instances, for example the predicted odds that an instance is positive. However, rankings can be computed without scores, as we demonstrate in this paper. We propose a lexicographic ranker, LexRank, whose rankings are derived not from scores, but from a simple ranking of attribute values obtained from the training data. Although various metrics can be used, we show that by using the odds ratio to rank the attribute values we obtain a ranker that is conceptually close to the naive Bayes classifier, in the sense that for every instance of LexRank there exists an instance of naive Bayes that achieves the same ranking. However, the reverse is not true, which means that LexRank is more biased than naive Bayes. We systematically develop the relationships and differences between classification, ranking, and probability estimation, which leads to a novel connection between the Brier score and ROC curves. Combining LexRank with isotonic regression, which derives probability estimates from the ROC convex hull, results in the lexicographic probability estimator LexProb.
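A minimal sketch of the lexicographic idea (my reconstruction from the abstract, with invented toy data): score each attribute value by a smoothed odds ratio estimated from the training labels, then rank instances by the tuple of their per-attribute scores, so the first attribute dominates and later attributes break ties.

from collections import Counter

# Toy data: (attribute tuple, label). Scores per value use a
# Laplace-smoothed odds ratio; instances are sorted lexicographically.
train = [(('sunny', 'high'), 1), (('sunny', 'low'), 1),
         (('rainy', 'high'), 0), (('rainy', 'low'), 1)]

def value_scores(train, attr):
    pos = Counter(x[attr] for x, y in train if y == 1)
    neg = Counter(x[attr] for x, y in train if y == 0)
    return {v: (pos[v] + 1) / (neg[v] + 1) for v in set(pos) | set(neg)}

scores = [value_scores(train, a) for a in range(2)]

def lex_key(x):
    return tuple(scores[a][x[a]] for a in range(len(x)))

test = [('sunny', 'high'), ('rainy', 'high'), ('rainy', 'low')]
print(sorted(test, key=lex_key, reverse=True))  # most-positive first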

Cite as

Peter Flach and Edson Matsubara. On classification, ranking, and probability estimation. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{flach_et_al:DagSemProc.07161.8,
  author =	{Flach, Peter and Matsubara, Edson},
  title =	{{On classification, ranking, and probability estimation}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.8},
  URN =		{urn:nbn:de:0030-drops-13828},
  doi =		{10.4230/DagSemProc.07161.8},
  annote =	{Keywords: Ranking, probability estimation, ROC analysis, calibration}
}
Document
Structural Sampling for Statistical Software Testing

Authors: Nicolas Baskiotis and Michele Sebag

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
Structural Statistical Software Testing exploits the control flow graph of the program being tested to construct test cases. While test cases can easily be extracted from feasible paths in the control flow graph, that is, paths which are actually exercised for some values of the program input, the feasible path region is a tiny fraction of the graph paths (less than $10^{-5}$ for medium-size programs). The S4T algorithm presented in this paper aims to address this limitation; as an Active Relational Learning algorithm, it uses the few feasible paths initially available to sample new feasible paths. The difficulty comes from the non-Markovian nature of the feasible path concept, due to the long-range dependencies between the nodes in the control flow graph. Experimental validation on real-world and artificial problems demonstrates significant improvements compared to the state of the art.
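As a back-of-the-envelope illustration of why this matters (a naive baseline, not the S4T algorithm): uniformly sampling paths from the control flow graph and filtering them with a feasibility oracle wastes almost every sample when the feasible fraction is below $10^{-5}$. Here cfg maps nodes to successors and is_feasible is a hypothetical oracle such as a symbolic-execution check.

import random

# Naive baseline: sample random graph paths and keep the feasible ones.
# With feasible paths this rare, nearly all samples are wasted, which
# is the limitation that active sampling of feasible paths targets.

def sample_path(cfg, entry, exit_node, max_len=50):
    path, node = [entry], entry
    while node != exit_node and len(path) < max_len:
        node = random.choice(cfg[node])
        path.append(node)
    return path if node == exit_node else None

def naive_feasible_paths(cfg, entry, exit_node, is_feasible, n=100_000):
    candidates = (sample_path(cfg, entry, exit_node) for _ in range(n))
    return [p for p in candidates if p is not None and is_feasible(p)]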

Cite as

Nicolas Baskiotis and Michele Sebag. Structural Sampling for Statistical Software Testing. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{baskiotis_et_al:DagSemProc.07161.9,
  author =	{Baskiotis, Nicolas and Sebag, Michele},
  title =	{{Structural Sampling for Statistical Software Testing}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.9},
  URN =		{urn:nbn:de:0030-drops-13875},
  doi =		{10.4230/DagSemProc.07161.9},
  annote =	{Keywords: Active Relational Learning, Software Testing, Autonomic Computing, Parikh Maps}
}
Document
Variational Bayes via Propositionalization

Authors: Taisuke Sato, Yoshitaka Kameya, and Kenichi Kurihara

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
We propose a unified approach to VB (variational Bayes) in symbolic-statistical modeling via propositionalization. By propositionalization we mean, broadly, expressing and computing probabilistic models such as BNs (Bayesian networks) and PCFGs (probabilistic context-free grammars) in terms of propositional logic that considers propositional variables as binary random variables. Our proposal is motivated by three observations.

The first is that PPC (propositionalized probability computation), i.e. probability computation formalized in a propositional setting, has turned out to be general and efficient when variable values are sparsely interdependent. Examples include (discrete) BNs, PCFGs and, more generally, PRISM, a Turing-complete logic programming language with EM learning ability that we have been developing, which computes probabilities using graphically represented AND/OR boolean formulas. Efficiency of PPC is classically testified by the Inside-Outside algorithm in the case of PCFGs, and by recent PPC approaches in the case of BNs, such as the one by Darwiche et al. that exploits zero probability and CSI (context-specific independence). Dechter et al. also revealed that PPC is a general computation scheme for BNs by their formulation of AND/OR search spaces.

Second, while VB has been around for some time as a practically effective approach to Bayesian modeling, its use is still somewhat restricted to simple models such as BNs and HMMs (hidden Markov models), though its usefulness is established through a variety of applications from model selection to prediction. On the other hand, it has already been proved that VB can be extended to PCFGs and is efficiently implementable using dynamic programming. Note that PCFGs are just one class of PPC, and much more general PPC is realized by PRISM. Accordingly, if VB is extended to PRISM's PPC, we will obtain VB for general probabilistic models, far wider than BNs and PCFGs.

The last observation is that once VB becomes available in PRISM, it saves us a lot of time and energy. We do not have to derive a new VB algorithm from scratch for each model and implement it; all we have to do is write a probabilistic model at the predicate level, and the rest of the work is carried out automatically in a unified manner by the PRISM system, as happens in the case of EM learning. Deriving and implementing a VB algorithm is a tedious, error-prone process, and ensuring its correctness would be difficult beyond PCFGs without formal semantics. PRISM augmented with VB will completely eliminate such needs and make it easy to explore and test new Bayesian models by helping the user cope with data sparseness and avoid over-fitting.
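For orientation, the variational Bayes objective underlying all of the above is the standard evidence lower bound, maximized over an approximating distribution q(z) of the hidden variables (textbook form, not notation from this abstract):

\log p(x) = \log \sum_{z} p(x, z)
\;\ge\; \mathbb{E}_{q(z)}\bigl[\log p(x, z)\bigr] - \mathbb{E}_{q(z)}\bigl[\log q(z)\bigr]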

Cite as

Taisuke Sato, Yoshitaka Kameya, and Kenichi Kurihara. Variational Bayes via Propositionalization. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{sato_et_al:DagSemProc.07161.10,
  author =	{Sato, Taisuke and Kameya, Yoshitaka and Kurihara, Kenichi},
  title =	{{Variational Bayes via Propositionalization}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.10},
  URN =		{urn:nbn:de:0030-drops-13860},
  doi =		{10.4230/DagSemProc.07161.10},
  annote =	{Keywords: Variational Bayes, propositionalized probability computation, PRISM}
}
Document
05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis

Authors: Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton

Published in: Dagstuhl Seminar Proceedings, Volume 5051, Probabilistic, Logical and Relational Learning - Towards a Synthesis (2006)


Abstract
From January 30 to February 4, 2005, the Dagstuhl Seminar 05051 "Probabilistic, Logical and Relational Learning - Towards a Synthesis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton. 05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.05051.1,
  author =	{De Raedt, Luc and Dietterich, Tom and Getoor, Lise and Muggleton, Stephen H.},
  title =	{{05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--27},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.1},
  URN =		{urn:nbn:de:0030-drops-4303},
  doi =		{10.4230/DagSemProc.05051.1},
  annote =	{Keywords: Statistical relational learning, probabilistic logic learning, inductive logic programming, knowledge representation, machine learning, uncertainty in artificial intelligence}
}
Document
Exploiting independence for branch operations in Bayesian learning of C&RTs

Authors: Nicos Angelopoulos and James Cussens

Published in: Dagstuhl Seminar Proceedings, Volume 5051, Probabilistic, Logical and Relational Learning - Towards a Synthesis (2006)


Abstract
In this paper we extend a methodology for Bayesian learning via MCMC with the ability to grow arbitrarily long branches in C&RT models. We are able to do so by exploiting independence in the model construction process. The ability to grow branches rather than single nodes has been noted as desirable in the literature. The most distinctive feature of the underlying methodology used here, in comparison to other approaches, is the coupling of the prior and the proposal. The main contribution of this paper is to show how taking advantage of independence in the coupled process can allow branch growing and swapping for proposal models.
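One generic way to see why coupling the prior and the proposal is convenient (a textbook observation, not the paper's construction): in an independence Metropolis-Hastings sampler whose proposal is the prior itself, the prior terms cancel and acceptance reduces to a likelihood ratio. sample_prior and log_likelihood below are hypothetical model callbacks.

import math
import random

# Independence MH with the prior as proposal: proposing theta' ~ p(theta)
# makes the prior cancel in the acceptance ratio, leaving only the
# likelihood ratio L(theta') / L(theta).

def prior_proposal_mh(sample_prior, log_likelihood, n_steps=1000):
    theta = sample_prior()
    samples = []
    for _ in range(n_steps):
        proposal = sample_prior()
        log_accept = log_likelihood(proposal) - log_likelihood(theta)
        if math.log(random.random()) < log_accept:
            theta = proposal
        samples.append(theta)
    return samples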

Cite as

Nicos Angelopoulos and James Cussens. Exploiting independence for branch operations in Bayesian learning of C&RTs. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{angelopoulos_et_al:DagSemProc.05051.6,
  author =	{Angelopoulos, Nicos and Cussens, James},
  title =	{{Exploiting independence for branch operations in Bayesian learning of C\&RTs}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.6},
  URN =		{urn:nbn:de:0030-drops-4157},
  doi =		{10.4230/DagSemProc.05051.6},
  annote =	{Keywords: Bayesian machine learning, classification and regression trees, stochastic logic programs}
}
Document
05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis

Authors: Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton

Published in: Dagstuhl Seminar Proceedings, Volume 5051, Probabilistic, Logical and Relational Learning - Towards a Synthesis (2006)


Abstract
A short report on the Dagstuhl seminar on Probabilistic, Logical and Relational Learning – Towards a Synthesis is given.

Cite as

Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton. 05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.05051.2,
  author =	{De Raedt, Luc and Dietterich, Tom and Getoor, Lise and Muggleton, Stephen H.},
  title =	{{05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--5},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.2},
  URN =		{urn:nbn:de:0030-drops-4121},
  doi =		{10.4230/DagSemProc.05051.2},
  annote =	{Keywords: Reasoning about Uncertainty, Relational and Logical Representations, Statistical Relational Learning, Inductive Logic Programming}
}
Document
An Architecture for Rational Agents

Authors: John W. Lloyd and Tim D. Sears

Published in: Dagstuhl Seminar Proceedings, Volume 5051, Probabilistic, Logical and Relational Learning - Towards a Synthesis (2006)


Abstract
This paper is concerned with designing architectures for rational agents. In the proposed architecture, agents have belief bases that are theories in a multi-modal, higher-order logic. Belief bases can be modified by a belief acquisition algorithm that includes both symbolic, on-line learning and conventional knowledge base update as special cases. A method of partitioning the state space of the agent in two different ways leads to a Bayesian network and associated influence diagram for selecting actions. The resulting agent architecture exhibits a tight integration between logic, probability, and learning. This approach to agent architecture is illustrated by a user agent that is able to personalise its behaviour according to the user's interests and preferences.
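The action-selection core of such an architecture reduces to maximizing expected utility over the partitioned state space. The fragment below is generic decision theory rather than the paper's construction; state_probs and utility are hypothetical callbacks derived from the belief base and the influence diagram.

# Pick the action with highest expected utility under the agent's
# current beliefs about which partition cell the world is in.

def select_action(actions, states, state_probs, utility):
    def expected_utility(a):
        return sum(state_probs(s) * utility(s, a) for s in states)
    return max(actions, key=expected_utility)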

Cite as

John W. Lloyd and Tim D. Sears. An Architecture for Rational Agents. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{lloyd_et_al:DagSemProc.05051.3,
  author =	{Lloyd, John W. and Sears, Tim D.},
  title =	{{An Architecture for Rational Agents}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.3},
  URN =		{urn:nbn:de:0030-drops-4192},
  doi =		{10.4230/DagSemProc.05051.3},
  annote =	{Keywords: Rational agent, agent architecture, belief base, Bayesian networks}
}
  • Refine by Author
  • 4 De Raedt, Luc
  • 4 Getoor, Lise
  • 3 Muggleton, Stephen H.
  • 2 Cussens, James
  • 2 Dietterich, Tom

  • Refine by Classification
  • 1 Computing methodologies → Machine learning
  • 1 Information systems → Data management systems

  • Refine by Keyword
  • 3 Statistical Relational Learning
  • 2 Statistical relational learning
  • 2 first-order logic
  • 1 Active Relational Learning
  • 1 Artificial Intelligence

  • Refine by Type
  • 22 document

  • Refine by Publication Year
  • 11 2006
  • 10 2008
  • 1 2019
