Dagstuhl Seminar Proceedings, Volume 7161



Publication Details

  • Published: 2008-03-06
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Documents

Document
07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis

Authors: Luc De Raedt, Thomas Dietterich, Lise Getoor, Kristian Kersting, and Stephen H. Muggleton


Abstract
From April 14–20, 2007, the Dagstuhl Seminar 07161 "Probabilistic, Logical and Relational Learning - A Further Synthesis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

Cite as

Luc De Raedt, Thomas Dietterich, Lise Getoor, Kristian Kersting, and Stephen H. Muggleton. 07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.07161.1,
  author =	{De Raedt, Luc and Dietterich, Thomas and Getoor, Lise and Kersting, Kristian and Muggleton, Stephen H.},
  title =	{{07161 Abstracts Collection – Probabilistic, Logical and Relational Learning - A Further Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.1},
  URN =		{urn:nbn:de:0030-drops-13885},
  doi =		{10.4230/DagSemProc.07161.1},
  annote =	{Keywords: Artificial Intelligence, Uncertainty in AI, Probabilistic Reasoning, Knowledge Representation, Logic Programming, Relational Learning, Inductive Logic Programming, Graphical Models, Statistical Relational Learning, First-Order Logical and Relational Probabilistic Languages}
}
Document
A general framework for unsupervised processing of structured data

Authors: Barbara Hammer, Alessio Micheli, and Alessandro Sperduti


Abstract
We propose a general framework for unsupervised recurrent and recursive networks. This proposal covers various popular approaches such as the standard self-organizing map (SOM), temporal Kohonen maps, recursive SOM, and SOM for structured data. We define Hebbian learning within this general framework. We show how approaches based on an energy function, such as neural gas, can be transferred to this abstract framework, so that proposals for new learning algorithms emerge.
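To make the recursive flavor of such models concrete, here is a minimal Python sketch of one update step of a temporal Kohonen map, one of the approaches the framework subsumes. The 1-D chain topology, decay constant alpha, and learning rate are illustrative choices, not the authors' formulation.

import numpy as np

def temporal_kohonen_step(weights, activity, x, alpha=0.5, lr=0.1, sigma=1.0):
    """One Hebbian update of a temporal Kohonen map (illustrative sketch).

    weights  : (n_units, dim) codebook vectors
    activity : (n_units,) leaky-integrated distances carrying temporal context
    x        : (dim,) current input
    """
    # Leaky integration of squared distances: past context decays by alpha.
    dist = np.sum((weights - x) ** 2, axis=1)
    activity = alpha * activity + (1 - alpha) * dist
    winner = np.argmin(activity)

    # Gaussian neighborhood on a 1-D chain of units (assumed topology).
    idx = np.arange(len(weights))
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))

    # Hebbian/SOM update: move codebooks toward the input, scaled by h.
    weights += lr * h[:, None] * (x - weights)
    return weights, activity, winner

W, a = np.random.rand(8, 3), np.zeros(8)
for x in np.random.rand(100, 3):     # a toy input sequence
    W, a, _ = temporal_kohonen_step(W, a, x)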

Cite as

Barbara Hammer, Alessio Micheli, and Alessandro Sperduti. A general framework for unsupervised processing of structured data. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{hammer_et_al:DagSemProc.07161.2,
  author =	{Hammer, Barbara and Micheli, Alessio and Sperduti, Alessandro},
  title =	{{A general framework for unsupervised processing of structured data}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.2},
  URN =		{urn:nbn:de:0030-drops-13837},
  doi =		{10.4230/DagSemProc.07161.2},
  annote =	{Keywords: Relational clustering, median clustering, recursive SOM models, kernel SOM}
}
Document
Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies

Authors: Sriraam Natarajan, Prasad Tadepalli, and Alan Fern


Abstract
Statistical relational models have been successfully used to model static probabilistic relationships between the entities of a domain. In this talk, we illustrate their use in a dynamic, decision-theoretic setting where the task is to assist a user by inferring his intentional structure and taking appropriate assistive actions. We show that statistical relational models can be used to succinctly express the system's prior knowledge about the user's goal-subgoal structure and to tune it with experience. As the system becomes better able to predict the user's goals, the effectiveness of its assistance improves. We show through experiments that both the hierarchical structure of the goals and the parameter sharing facilitated by relational models significantly improve the learning speed.
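As a toy illustration of the inference step described here (not the authors' model), the following Python fragment performs a Bayesian update of a belief over user goals from one observed action. The goals, actions, and probabilities are invented for the example; in the paper, the likelihoods would come from a relational model over a goal-subgoal hierarchy.

# Hypothetical goals and action likelihoods, P(action | goal).
prior = {"write_paper": 0.6, "file_taxes": 0.4}
likelihood = {
    ("open_editor", "write_paper"): 0.7,
    ("open_editor", "file_taxes"): 0.1,
}

def update_goal_belief(prior, likelihood, action):
    post = {g: prior[g] * likelihood.get((action, g), 1e-6) for g in prior}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

belief = update_goal_belief(prior, likelihood, "open_editor")
# belief now strongly favors "write_paper"; an assistant would pick
# assistive actions according to this posterior over goals.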

Cite as

Sriraam Natarajan, Prasad Tadepalli, and Alan Fern. Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{natarajan_et_al:DagSemProc.07161.3,
  author =	{Natarajan, Sriraam and Tadepalli, Prasad and Fern, Alan},
  title =	{{Exploiting prior knowledge in Intelligent Assistants - Combining relational models with hierarchies}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.3},
  URN =		{urn:nbn:de:0030-drops-13856},
  doi =		{10.4230/DagSemProc.07161.3},
  annote =	{Keywords: Statistical Relational Learning, Intelligent Assistants}
}
Document
Learning Probabilistic Relational Dynamics for Multiple Tasks

Authors: Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling


Abstract
The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This extended abstract addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains.
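The following self-contained Python toy mimics the shape of the coordinate-ascent procedure: task-specific rule sets are optimized against their data under a penalty for deviating from a shared prototype, which is then re-fit. The rule names, scoring function, and majority-vote prototype update are runnable stand-ins for the paper's stochastic-modification prior, not the authors' algorithm.

import itertools

# Toy rule language: a rule set is a frozenset of rule names.
RULES = ["move(X,Y)", "stack(X,Y)", "unstack(X,Y)", "noop"]

def score(rule_set, data, prototype):
    fit = sum(2 for d in data if d in rule_set)   # reward for explained data
    penalty = len(rule_set ^ prototype)           # edits away from the prototype
    return fit - penalty

def best_rule_set(data, prototype):
    candidates = [frozenset(c) for r in range(len(RULES) + 1)
                  for c in itertools.combinations(RULES, r)]
    return max(candidates, key=lambda rs: score(rs, data, prototype))

def coordinate_ascent(tasks, prototype, n_iters=10):
    for _ in range(n_iters):
        # Step 1: optimize each task-specific rule set under the current prior.
        rule_sets = {t: best_rule_set(d, prototype) for t, d in tasks.items()}
        # Step 2: re-fit the prototype (here: rules used by most tasks).
        counts = {r: sum(r in rs for rs in rule_sets.values()) for r in RULES}
        prototype = frozenset(r for r, c in counts.items() if c > len(tasks) / 2)
    return prototype, rule_sets

tasks = {"task1": ["move(X,Y)", "stack(X,Y)"], "task2": ["move(X,Y)", "noop"]}
proto, sets = coordinate_ascent(tasks, frozenset())
# proto captures what the tasks share; each task keeps its own extra rules.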

Cite as

Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling. Learning Probabilistic Relational Dynamics for Multiple Tasks. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deshpande_et_al:DagSemProc.07161.4,
  author =	{Deshpande, Ashwin and Milch, Brian and Zettlemoyer, Luke S. and Kaelbling, Leslie Pack},
  title =	{{Learning Probabilistic Relational Dynamics for Multiple Tasks}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.4},
  URN =		{urn:nbn:de:0030-drops-13846},
  doi =		{10.4230/DagSemProc.07161.4},
  annote =	{Keywords: Hierarchical Bayesian models, transfer learning, multi-task learning, probabilistic planning rules}
}
Document
Logical Particle Filtering

Authors: Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling


Abstract
In this paper, we consider the problem of filtering in relational hidden Markov models. We present a compact representation for such models and an associated logical particle filtering algorithm. Each particle contains a logical formula that describes a set of states. The algorithm updates the formulae as new observations are received. Since a single particle tracks many states, this filter can be more accurate than a traditional particle filter in high dimensional state spaces, as we demonstrate in experiments.
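A toy rendering of the idea in Python: each particle is a set of states, updated by progressing every state it covers and then conditioning on the observation. The paper represents such sets intensionally as logical formulae, which is what makes the approach compact; the extensional states, transition model, and observation test below are invented for illustration.

import random

def transition(state):
    # Nondeterministic successor states (toy dynamics over integers).
    return {state, state + 1}

def consistent(state, obs):
    return (state % 2 == 0) == obs   # obs reports whether the state is even

def step(particles, obs):
    new_particles = []
    for states in particles:
        # Progress every state tracked by this particle, then condition on obs.
        succ = set().union(*(transition(s) for s in states))
        succ = {s for s in succ if consistent(s, obs)}
        if succ:
            new_particles.append(succ)
    # Resample to keep the particle count fixed (uniform weights here).
    return [random.choice(new_particles) for _ in particles] if new_particles else []

particles = [{0, 2}, {1, 3}]           # each particle covers several states
particles = step(particles, obs=True)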

Cite as

Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling. Logical Particle Filtering. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{zettlemoyer_et_al:DagSemProc.07161.5,
  author =	{Zettlemoyer, Luke S. and Pasula, Hanna M. and Pack Kaelbling, Leslie},
  title =	{{Logical Particle Filtering}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.5},
  URN =		{urn:nbn:de:0030-drops-13792},
  doi =		{10.4230/DagSemProc.07161.5},
  annote =	{Keywords: Particle filter, logical hidden Markov model}
}
Document
Markov Logic in Infinite Domains

Authors: Pedro Domingos and Parag Singla


Abstract
Markov logic combines logic and probability by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. Unfortunately, in its original formulation it does not have the full power of first-order logic, because it applies only to finite domains. Recently, we have extended Markov logic to infinite domains, by casting it in the framework of Gibbs measures. In this talk I will summarize our main results to date, including sufficient conditions for the existence and uniqueness of a Gibbs measure consistent with an infinite MLN, and properties of the set of consistent measures in the non-unique case. (Many important phenomena, like phase transitions, are modeled by non-unique MLNs.) Under the conditions for existence, we have extended to infinite domains the result in Richardson and Domingos (2006) that first-order logic is the limiting case of Markov logic when all weights tend to infinity. I will also discuss some fundamental limitations of Herbrand interpretations (and representations based on them) for probabilistic modeling of infinite domains, and how to get around them. Finally, I will discuss some of the surprising insights for learning and inference in large finite domains that result from considering the infinite limit.
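For reference, the finite-domain definition being generalized is the standard one from Richardson and Domingos (2006): a Markov logic network with weighted formulas $(F_i, w_i)$ assigns each possible world $x$ the probability

  P(X = x) \;=\; \frac{1}{Z}\exp\Big(\sum_i w_i\, n_i(x)\Big),
  \qquad
  Z \;=\; \sum_{x'} \exp\Big(\sum_i w_i\, n_i(x')\Big),

where $n_i(x)$ counts the true groundings of $F_i$ in $x$. In an infinite domain the sum defining $Z$ is no longer well defined, which is why the extension described above works with Gibbs measures, specified through local conditional distributions, rather than a globally normalized distribution.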

Cite as

Pedro Domingos and Parag Singla. Markov Logic in Infinite Domains. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{domingos_et_al:DagSemProc.07161.6,
  author =	{Domingos, Pedro and Singla, Parag},
  title =	{{Markov Logic in Infinite Domains}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.6},
  URN =		{urn:nbn:de:0030-drops-13811},
  doi =		{10.4230/DagSemProc.07161.6},
  annote =	{Keywords: Markov logic networks, Gibbs measures, first-order logic, infinite probabilistic models, Markov networks}
}
Document
Model equivalence of PRISM programs

Authors: James Cussens


Abstract
The problem of deciding the probability model equivalence of two PRISM programs is addressed. In the finite case this problem can be solved (albeit slowly) using techniques from algebraic statistics, specifically the computation of elimination ideals and Gröbner bases. A very brief introduction to algebraic statistics is given. Consideration is given to cases where shortcuts to proving or disproving model equivalence are available.
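As a small concrete instance of the elimination-ideal computation mentioned above, the following SymPy fragment implicitizes two toy parametric models of a two-outcome distribution and compares the resulting constraints on the probabilities. The models are invented for illustration and are not PRISM programs.

from sympy import symbols, groebner

t, p0, p1 = symbols('t p0 p1')

def implicit_constraints(param_eqs):
    # Lex order with the parameter t first eliminates it; keep the basis
    # elements that mention only the probabilities p0, p1.
    G = groebner(param_eqs, t, p0, p1, order='lex')
    return [g for g in G.exprs if t not in g.free_symbols]

model_a = implicit_constraints([p0 - t, p1 - (1 - t)])         # -> [p0 + p1 - 1]
model_b = implicit_constraints([p0 - t**2, p1 - (1 - t**2)])   # -> [p0 + p1 - 1]
print(model_a == model_b)   # equal elimination ideals: the models coincide

(As the abstract notes, this is slow in general; Gröbner basis computation has very high worst-case complexity.)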

Cite as

James Cussens. Model equivalence of PRISM programs. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{cussens:DagSemProc.07161.7,
  author =	{Cussens, James},
  title =	{{Model equivalence of PRISM programs}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.7},
  URN =		{urn:nbn:de:0030-drops-13808},
  doi =		{10.4230/DagSemProc.07161.7},
  annote =	{Keywords: PRISM programs, model equivalence, model inclusion, algebraic statistics, algebraic geometry, ideals, varieties, Gr\"{o}bner bases, polynomials}
}
Document
On classification, ranking, and probability estimation

Authors: Peter Flach and Edson Matsubara


Abstract
Given a binary classification task, a ranker is an algorithm that can sort a set of instances from highest to lowest expectation that the instance is positive. In contrast to a classifier, a ranker does not output class predictions – although it can be turned into a classifier with the help of an additional procedure to split the ranked list into two. A straightforward way to compute rankings is to train a scoring classifier to assign numerical scores to instances, for example the predicted odds that an instance is positive. However, rankings can be computed without scores, as we demonstrate in this paper. We propose a lexicographic ranker, LexRank, whose rankings are derived not from scores, but from a simple ranking of attribute values obtained from the training data. Although various metrics can be used, we show that by using the odds ratio to rank the attribute values we obtain a ranker that is conceptually close to the naive Bayes classifier, in the sense that for every instance of LexRank there exists an instance of naive Bayes that achieves the same ranking. However, the reverse is not true, which means that LexRank is more biased than naive Bayes. We systematically develop the relationships and differences between classification, ranking, and probability estimation, which leads to a novel connection between the Brier score and ROC curves. Combining LexRank with isotonic regression, which derives probability estimates from the ROC convex hull, results in the lexicographic probability estimator LexProb.
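A rough Python sketch of a lexicographic ranker in this spirit: attribute values are ranked by their empirical odds ratio, and instances are then sorted lexicographically by those value ranks. The Laplace-style smoothing and the left-to-right attribute ordering are assumptions made to keep the sketch self-contained, not details taken from the paper.

from collections import Counter

def value_odds(column, y, smoothing=1.0):
    pos = Counter(v for v, c in zip(column, y) if c == 1)
    neg = Counter(v for v, c in zip(column, y) if c == 0)
    return {v: (pos[v] + smoothing) / (neg[v] + smoothing) for v in set(column)}

def lex_ranker(X, y):
    n_attrs = len(X[0])
    odds = [value_odds([row[a] for row in X], y) for a in range(n_attrs)]
    # Rank of each value within its attribute: 0 = most positive-looking.
    ranks = [{v: r for r, (v, _) in
              enumerate(sorted(o.items(), key=lambda kv: -kv[1]))}
             for o in odds]
    def key(row):
        return tuple(ranks[a].get(row[a], len(ranks[a])) for a in range(n_attrs))
    return lambda instances: sorted(instances, key=key)

X = [("sunny", "hot"), ("rainy", "cold"), ("sunny", "cold")]
y = [1, 0, 1]
rank = lex_ranker(X, y)
print(rank(X))   # instances from most to least positive-looking, no scores used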

Cite as

Peter Flach and Edson Matsubara. On classification, ranking, and probability estimation. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{flach_et_al:DagSemProc.07161.8,
  author =	{Flach, Peter and Matsubara, Edson},
  title =	{{On classification, ranking, and probability estimation}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.8},
  URN =		{urn:nbn:de:0030-drops-13828},
  doi =		{10.4230/DagSemProc.07161.8},
  annote =	{Keywords: Ranking, probability estimation, ROC analysis, calibration}
}
Document
Structural Sampling for Statistical Software Testing

Authors: Nicolas Baskiotis and Michele Sebag


Abstract
Structural Statistical Software Testing exploits the control flow graph of the program being tested to construct test cases. While test cases can easily be extracted from feasible paths in the control flow graph, that is, paths which are actually exercised for some values of the program input, the feasible path region is a tiny fraction of the graph paths (less than $10^{-5}$ for medium-size programs). The S4T algorithm presented in this paper aims to address this limitation; as an active relational learning algorithm, it uses the few feasible paths initially available to sample new feasible paths. The difficulty comes from the non-Markovian nature of the feasible path concept, due to the long-range dependencies between the nodes in the control flow graph. Experimental validation on real-world and artificial problems demonstrates significant improvements compared to the state of the art.
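For contrast with the approach described, here is a deliberately naive first-order sampler in Python: it biases a random walk on the control flow graph using transition counts estimated from the known feasible paths. The long-range dependencies that make feasibility non-Markovian are exactly what this baseline ignores, which is the limitation S4T targets; the graph and seed paths are invented.

import random

CFG = {"start": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["end"], "end": []}
feasible = [["start", "a", "c", "end"], ["start", "a", "c", "end"]]

def transition_counts(paths):
    counts = {}
    for p in paths:
        for u, v in zip(p, p[1:]):
            counts.setdefault(u, {}).setdefault(v, 0)
            counts[u][v] += 1
    return counts

def sample_path(cfg, counts, max_len=20):
    node, path = "start", ["start"]
    while node != "end" and len(path) < max_len:
        succs = cfg[node]
        w = [counts.get(node, {}).get(s, 0) + 1 for s in succs]  # +1 smoothing
        node = random.choices(succs, weights=w)[0]
        path.append(node)
    return path

print(sample_path(CFG, transition_counts(feasible)))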

Cite as

Nicolas Baskiotis and Michele Sebag. Structural Sampling for Statistical Software Testing. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{baskiotis_et_al:DagSemProc.07161.9,
  author =	{Baskiotis, Nicolas and Sebag, Michele},
  title =	{{Structural Sampling for Statistical Software Testing}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.9},
  URN =		{urn:nbn:de:0030-drops-13875},
  doi =		{10.4230/DagSemProc.07161.9},
  annote =	{Keywords: Active Relational Learning, Software Testing, Autonomic Computing, Parikh Maps}
}
Document
Variational Bayes via Propositionalization

Authors: Taisuke Sato, Yoshitaka Kameya, and Kenichi Kurihara


Abstract
We propose a unified approach to VB (variational Bayes) in symbolic-statistical modeling via propositionalization. By propositionalization we mean, broadly, expressing and computing probabilistic models such as BNs (Bayesian networks) and PCFGs (probabilistic context-free grammars) in terms of propositional logic that treats propositional variables as binary random variables. Our proposal is motivated by three observations.

The first is that PPC (propositionalized probability computation), i.e., probability computation formalized in a propositional setting, has turned out to be general and efficient when variable values are sparsely interdependent. Examples include (discrete) BNs, PCFGs, and more generally PRISM, a Turing-complete logic programming language with EM learning ability that we have been developing, which computes probabilities using graphically represented AND/OR Boolean formulas. The efficiency of PPC is classically attested by the Inside-Outside algorithm in the case of PCFGs, and by recent PPC approaches to BNs such as the one by Darwiche et al. that exploits zero probabilities and CSI (context-specific independence). Dechter et al. also revealed that PPC is a general computation scheme for BNs through their formulation of AND/OR search spaces.

Second, while VB has been around for some time as a practically effective approach to Bayesian modeling, its use is still somewhat restricted to simple models such as BNs and HMMs (hidden Markov models), though its usefulness is established through a variety of applications from model selection to prediction. On the other hand, it has already been proved that VB can be extended to PCFGs and is efficiently implementable using dynamic programming. Note that PCFGs are just one class of PPC, and much more general PPC is realized by PRISM. Accordingly, if VB is extended to PRISM's PPC, we will obtain VB for general probabilistic models, far wider than BNs and PCFGs.

The last observation is that once VB becomes available in PRISM, it saves us a lot of time and energy. First, we do not have to derive a new VB algorithm from scratch for each model and implement it. All we have to do is write a probabilistic model at the predicate level; the rest of the work is carried out automatically in a unified manner by the PRISM system, as happens in the case of EM learning. Deriving and implementing a VB algorithm is a tedious, error-prone process, and ensuring its correctness would be difficult beyond PCFGs without formal semantics. PRISM augmented with VB will completely eliminate such needs and make it easy to explore and test new Bayesian models by helping the user cope with data sparseness and avoid over-fitting.
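To ground the discussion, here is a compact NumPy/SciPy sketch of the variational-Bayes EM step that such a system would automate: VB for a two-component binomial mixture with Dirichlet priors, where the E-step uses digamma-adjusted pseudo-parameters in place of EM's point estimates. The model and priors are illustrative; in PRISM the analogous computation would run over the propositionalized AND/OR structure of an arbitrary program.

import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
n = 10                                            # flips per observation
data = np.concatenate([rng.binomial(n, 0.8, 15),  # heads counts, two coins
                       rng.binomial(n, 0.3, 15)])

alpha_mix = np.ones(2)        # Dirichlet prior over mixture weights
alpha_heads = np.ones((2, 2)) # per-component Beta prior over (heads, tails)

for _ in range(50):
    # E-step with digamma-adjusted "pseudo-parameters" (the VB twist on EM).
    log_pi = digamma(alpha_mix) - digamma(alpha_mix.sum())
    log_th = digamma(alpha_heads) - digamma(alpha_heads.sum(axis=1, keepdims=True))
    log_r = log_pi[None, :] + (data[:, None] * log_th[None, :, 0]
                               + (n - data)[:, None] * log_th[None, :, 1])
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)             # responsibilities
    # M-step: posterior Dirichlet parameters = prior + expected counts.
    alpha_mix = 1.0 + r.sum(axis=0)
    alpha_heads = 1.0 + np.stack([(r * data[:, None]).sum(axis=0),
                                  (r * (n - data)[:, None]).sum(axis=0)], axis=1)

# alpha_heads[:, 0] / alpha_heads.sum(axis=1) now approximates the coin biases.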

Cite as

Taisuke Sato, Yoshitaka Kameya, and Kenichi Kurihara. Variational Bayes via Propositionalization. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{sato_et_al:DagSemProc.07161.10,
  author =	{Sato, Taisuke and Kameya, Yoshitaka and Kurihara, Kenichi},
  title =	{{Variational Bayes via Propositionalization}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.10},
  URN =		{urn:nbn:de:0030-drops-13860},
  doi =		{10.4230/DagSemProc.07161.10},
  annote =	{Keywords: Variational Bayes, propositionalized probability computation, PRISM}
}
