Dagstuhl Seminar Proceedings, Volume 5051



Publication Details

  • Published: 2006-01-19
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik


Documents

Document
05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis

Authors: Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton


Abstract
From 30.01.05 to 04.02.05, the Dagstuhl Seminar 05051 "Probabilistic, Logical and Relational Learning - Towards a Synthesis" was held at the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

Cite as

Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton. 05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.05051.1,
  author =	{De Raedt, Luc and Dietterich, Tom and Getoor, Lise and Muggleton, Stephen H.},
  title =	{{05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--27},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.1},
  URN =		{urn:nbn:de:0030-drops-4303},
  doi =		{10.4230/DagSemProc.05051.1},
  annote =	{Keywords: Statistical relational learning, probabilistic logic learning, inductive logic programming, knowledge representation, machine learning, uncertainty in artificial intelligence}
}
Document
05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis

Authors: Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton


Abstract
A short report on the Dagstuhl seminar on Probabilistic, Logical and Relational Learning – Towards a Synthesis is given.

Cite as

Luc De Raedt, Tom Dietterich, Lise Getoor, and Stephen H. Muggleton. 05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{deraedt_et_al:DagSemProc.05051.2,
  author =	{De Raedt, Luc and Dietterich, Tom and Getoor, Lise and Muggleton, Stephen H.},
  title =	{{05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--5},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.2},
  URN =		{urn:nbn:de:0030-drops-4121},
  doi =		{10.4230/DagSemProc.05051.2},
  annote =	{Keywords: Reasoning about Uncertainty, Relational and Logical Representations, Statistical Relational Learning, Inductive Logic Programming}
}
Document
An Architecture for Rational Agents

Authors: John W. Lloyd and Tim D. Sears


Abstract
This paper is concerned with designing architectures for rational agents. In the proposed architecture, agents have belief bases that are theories in a multi-modal, higher-order logic. Belief bases can be modified by a belief acquisition algorithm that includes both symbolic, on-line learning and conventional knowledge base update as special cases. A method of partitioning the state space of the agent in two different ways leads to a Bayesian network and associated influence diagram for selecting actions. The resulting agent architecture exhibits a tight integration between logic, probability, and learning. This approach to agent architecture is illustrated by a user agent that is able to personalise its behaviour according to the user's interests and preferences.
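To make the action-selection step concrete, here is a minimal Python sketch of expected-utility action selection over a partitioned state space. It is not the paper's architecture, which derives a Bayesian network and influence diagram from a higher-order belief base; the user-interest states, actions, and utilities below are hypothetical placeholders.

# Minimal sketch (not the paper's system): expected-utility action
# selection over a partitioned state space. States, actions, and
# utilities are illustrative placeholders.

def expected_utility(action, belief, utility):
    """Sum utility over states, weighted by the agent's beliefs."""
    return sum(belief[s] * utility[(s, action)] for s in belief)

def select_action(actions, belief, utility):
    """Pick the action maximising expected utility under current beliefs."""
    return max(actions, key=lambda a: expected_utility(a, belief, utility))

# Beliefs over a coarse partition of user-interest states (hypothetical).
belief = {"interested": 0.7, "not_interested": 0.3}
utility = {
    ("interested", "recommend"): 1.0,
    ("interested", "stay_quiet"): 0.0,
    ("not_interested", "recommend"): -0.5,
    ("not_interested", "stay_quiet"): 0.0,
}
print(select_action(["recommend", "stay_quiet"], belief, utility))  # recommend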

Cite as

John W. Lloyd and Tim D. Sears. An Architecture for Rational Agents. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{lloyd_et_al:DagSemProc.05051.3,
  author =	{Lloyd, John W. and Sears, Tim D.},
  title =	{{An Architecture for Rational Agents}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.3},
  URN =		{urn:nbn:de:0030-drops-4192},
  doi =		{10.4230/DagSemProc.05051.3},
  annote =	{Keywords: Rational agent, agent architecture, belief base, Bayesian networks}
}
Document
BLOG: Probabilistic Models with Unknown Objects

Authors: Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov


Abstract
We introduce BLOG, a formal language for defining probability models with unknown objects and identity uncertainty. A BLOG model describes a generative process in which some steps add objects to the world, and others determine attributes and relations on these objects. Subject to certain acyclicity constraints, a BLOG model specifies a unique probability distribution over first-order model structures that can contain varying and unbounded numbers of objects. Furthermore, inference algorithms exist for a large class of BLOG models.
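As an illustration of the kind of generative process a BLOG model denotes, the following plain-Python sketch first samples how many objects exist and then samples their attributes and noisy observations. This is not BLOG syntax; the radar-style domain and all distributions are invented for illustration.

# Illustrative sketch of a generative process with unknown objects:
# sample how many objects exist, then their attributes, then noisy
# observations that refer to them with identity uncertainty.
import random

def sample_world():
    # The number of objects is itself random (unbounded in principle).
    n_aircraft = random.randint(0, 5)        # stand-in for a Poisson prior
    aircraft = []
    for i in range(n_aircraft):
        position = random.gauss(0.0, 10.0)   # attribute of each object
        aircraft.append({"id": i, "position": position})
    # Observations refer to objects with identity uncertainty: each
    # aircraft may or may not produce a radar blip.
    blips = [a["position"] + random.gauss(0.0, 1.0)
             for a in aircraft if random.random() < 0.9]
    return aircraft, blips

world, observations = sample_world()
print(len(world), "aircraft,", len(observations), "blips")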

Cite as

Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov. BLOG: Probabilistic Models with Unknown Objects. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{milch_et_al:DagSemProc.05051.4,
  author =	{Milch, Brian and Marthi, Bhaskara and Russell, Stuart and Sontag, David and Ong, Daniel L. and Kolobov, Andrey},
  title =	{{BLOG: Probabilistic Models with Unknown Objects}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.4},
  URN =		{urn:nbn:de:0030-drops-4169},
  doi =		{10.4230/DagSemProc.05051.4},
  annote =	{Keywords: Knowledge representation, probability, first-order logic, identity uncertainty, unknown objects}
}
Document
Combining Bayesian Networks with Higher-Order Data Representations

Authors: Elias Gyftodimos and Peter A. Flach


Abstract
This paper introduces Higher-Order Bayesian Networks, a probabilistic reasoning formalism that combines the efficient reasoning mechanisms of Bayesian Networks with the expressive power of higher-order logics. We discuss how the proposed graphical model is used to define a probability distribution semantics over particular families of higher-order terms. We give an example of the application of our method to the Mutagenesis domain, a popular dataset from the Inductive Logic Programming community, showing how we employ probabilistic inference and model learning to construct a probabilistic classifier based on Higher-Order Bayesian Networks.
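The following toy sketch conveys the general idea of putting a probability distribution on structured (here, list-valued) terms by composing simple conditional distributions. It is far simpler than the paper's Higher-Order Bayesian Network formalism, and all parameters are invented.

# Toy distribution over structured terms (lists): a geometric prior on
# length composed with a per-element distribution. Illustrative only.
import random

P_STOP = 0.3                        # geometric prior on list length
P_ELEM = {"a": 0.6, "b": 0.4}       # distribution over list elements

def prob_of_list(xs):
    """P(xs) = stop probability times a per-element factor for each item."""
    p = P_STOP
    for x in xs:
        p *= (1 - P_STOP) * P_ELEM[x]
    return p

def sample_list():
    """Draw a list from the same distribution."""
    xs = []
    while random.random() > P_STOP:
        xs.append(random.choices(list(P_ELEM), list(P_ELEM.values()))[0])
    return xs

print(prob_of_list(["a", "b"]))     # probability of one structured term
print(sample_list())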

Cite as

Elias Gyftodimos and Peter A. Flach. Combining Bayesian Networks with Higher-Order Data Representations. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{gyftodimos_et_al:DagSemProc.05051.5,
  author =	{Gyftodimos, Elias and Flach, Peter A.},
  title =	{{Combining Bayesian Networks with Higher-Order Data Representations}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.5},
  URN =		{urn:nbn:de:0030-drops-4139},
  doi =		{10.4230/DagSemProc.05051.5},
  annote =	{Keywords: Probabilistic reasoning, graphical models}
}
Document
Exploiting independence for branch operations in Bayesian learning of C&RTs

Authors: Nicos Angelopoulos and James Cussens


Abstract
In this paper we extend a methodology for Bayesian learning via MCMC with the ability to grow arbitrarily long branches in C&RT models. We are able to do so by exploiting independence in the model construction process. The ability to grow branches rather than single nodes has been noted as desirable in the literature. The most distinctive feature of the underlying methodology used here, in comparison to other approaches, is the coupling of the prior and the proposal. The main contribution of this paper is to show how taking advantage of independence in the coupled process allows branch growing and swapping for proposal models.
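A skeletal Metropolis-Hastings sketch of the branch-growing idea follows: the proposal grows a whole chain of nodes in one move rather than a single node. The tree encoding, dummy posterior, and move set are placeholders rather than the paper's prior-coupled proposal, and the proposal-ratio correction is omitted for brevity.

# Skeletal MCMC sketch: propose a whole branch (chain of nodes) per move.
import math
import random

def grow_branch(tree, depth):
    """Stack a random chain of `depth` new split nodes above `tree`."""
    if depth == 0:
        return tree
    # A node is (split_feature, left_subtree, right_subtree); None is a leaf.
    return (random.randrange(4), grow_branch(tree, depth - 1), None)

def log_posterior(tree):
    """Dummy posterior penalising size; a real model also scores data fit."""
    def size(t):
        return 0 if t is None else 1 + size(t[1]) + size(t[2])
    return -0.5 * size(tree)

def mh_step(tree):
    depth = random.randint(1, 3)    # propose a whole branch, not one node
    proposal = grow_branch(tree, depth)
    # Accept with probability min(1, posterior ratio); the proposal-ratio
    # term of a full Metropolis-Hastings sampler is omitted in this sketch.
    log_alpha = log_posterior(proposal) - log_posterior(tree)
    return proposal if random.random() < math.exp(min(0.0, log_alpha)) else tree

tree = None
for _ in range(100):
    tree = mh_step(tree)
print(log_posterior(tree))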

Cite as

Nicos Angelopoulos and James Cussens. Exploiting independence for branch operations in Bayesian learning of C&RTs. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{angelopoulos_et_al:DagSemProc.05051.6,
  author =	{Angelopoulos, Nicos and Cussens, James},
  title =	{{Exploiting independence for branch operations in Bayesian learning of C\&RTs}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.6},
  URN =		{urn:nbn:de:0030-drops-4157},
  doi =		{10.4230/DagSemProc.05051.6},
  annote =	{Keywords: Bayesian machine learning, classification and regression trees, stochastic logic programs}
}
Document
Importance Sampling on Relational Bayesian Networks

Authors: Manfred Jaeger


Abstract
We present techniques for importance sampling from distributions defined by Relational Bayesian Networks. The methods operate directly on the abstract representation language, and can therefore be applied in situations where sampling from a standard Bayesian Network representation is infeasible. We describe experimental results from using standard, adaptive and backward sampling strategies. Furthermore, our experiments use a model that illustrates a fully general way of translating the recent framework of Markov Logic Networks into Relational Bayesian Networks.
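For readers unfamiliar with the underlying estimator, here is a generic self-normalised importance-sampling sketch: draw from a tractable proposal q and reweight by p/q. The Gaussian densities are simple stand-ins; the paper's contribution is running this kind of scheme directly on the relational representation.

# Generic importance sampling: estimate E_p[f(x)] with samples from q.
import math
import random

def p_density(x):
    """Target density p: a standard normal (illustrative stand-in)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def q_density(x):
    """Proposal density q: a wider normal that is easy to sample from."""
    return math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2 * math.pi))

def importance_estimate(f, n=100_000):
    """Self-normalised importance-sampling estimate of E_p[f(x)]."""
    total = weight_sum = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 2.0)            # draw from the proposal q
        w = p_density(x) / q_density(x)       # importance weight p/q
        total += w * f(x)
        weight_sum += w
    return total / weight_sum

print(importance_estimate(lambda x: x * x))   # ~1.0, the variance under p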

Cite as

Manfred Jaeger. Importance Sampling on Relational Bayesian Networks. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{jaeger:DagSemProc.05051.7,
  author =	{Jaeger, Manfred},
  title =	{{Importance Sampling on Relational Bayesian Networks}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.7},
  URN =		{urn:nbn:de:0030-drops-4116},
  doi =		{10.4230/DagSemProc.05051.7},
  annote =	{Keywords: Relational models, Importance Sampling}
}
Document
Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting

Authors: Andrea Passerini, Paolo Frasconi, and Luc De Raedt


Abstract
An example-trace is a sequence of steps taken by a program on a given example input. Different approaches exist for exploiting example-traces for learning, all explicitly inferring a target program from positive and negative traces. We generalize this idea by developing similarity measures between traces in order to learn to discriminate between positive and negative ones. This allows us to combine the expressiveness of inductive logic programming in representing knowledge with the statistical properties of kernel machines. Logic programs are used to generate proofs of given visitor programs which exploit the available background knowledge, while kernel machines are employed to learn from such proofs.
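To give a toy flavour of the approach, the recursive function below scores the similarity of two proof trees by matching node labels and recursing into children. It is a generic tree-kernel sketch, much simpler than the kernels in the paper; the (functor, children) encoding and the sort/2 example are hypothetical.

# Toy similarity between proof trees: match functors, recurse on children.
def tree_kernel(t1, t2):
    f1, kids1 = t1
    f2, kids2 = t2
    if f1 != f2:
        return 0.0
    score = 1.0                              # matching functors
    for c1, c2 in zip(kids1, kids2):         # compare aligned children
        score += tree_kernel(c1, c2)
    return score

# Two tiny "proof trees" for a hypothetical predicate sort/2.
proof_a = ("sort", [("partition", []), ("append", [])])
proof_b = ("sort", [("partition", []), ("reverse", [])])
print(tree_kernel(proof_a, proof_b))         # 2.0: root + one child match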

Cite as

Andrea Passerini, Paolo Frasconi, and Luc De Raedt. Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{passerini_et_al:DagSemProc.05051.8,
  author =	{Passerini, Andrea and Frasconi, Paolo and De Raedt, Luc},
  title =	{{Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--20},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.8},
  URN =		{urn:nbn:de:0030-drops-4171},
  doi =		{10.4230/DagSemProc.05051.8},
  annote =	{Keywords: Proof Trees, Logic Kernels, Learning from Traces}
}
Document
Learning through failure

Authors: Taisuke Sato and Yoshitaka Kameya


Abstract
PRISM, a symbolic-statistical modeling language we have been developing since 1997, recently incorporated a program transformation technique to handle failure in generative modeling. I'll show that this feature opens the way to new breeds of symbolic models, including EM learning from negative observations, constrained HMMs and finite PCFGs.
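The core issue can be seen in a few lines of Python: when some derivations of a generative program fail, the observed distribution is conditioned on success, so parameter learning must account for the success probability. The toy program below is illustrative and is not PRISM code.

# Toy generative program with failure: the observed distribution is
# P(outcome | success), so learning must renormalise by P(success).
import random
from collections import Counter

def generate():
    """Make two random choices; fail when a constraint is violated."""
    x = random.choice(["a", "b"])
    y = random.choice(["a", "b"])
    if x == y:                    # constraint: the two choices must differ
        return None               # the derivation fails
    return (x, y)

samples = [generate() for _ in range(10_000)]
successes = [s for s in samples if s is not None]
print("P(success) ~", len(successes) / len(samples))   # ~0.5
print(Counter(successes))         # only ('a','b') and ('b','a') survive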

Cite as

Taisuke Sato and Yoshitaka Kameya. Learning through failure. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{sato_et_al:DagSemProc.05051.9,
  author =	{Sato, Taisuke and Kameya, Yoshitaka},
  title =	{{Learning through failure}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.9},
  URN =		{urn:nbn:de:0030-drops-4185},
  doi =		{10.4230/DagSemProc.05051.9},
  annote =	{Keywords: Program transformation, failure, generative modeling}
}
Document
Leveraging relational autocorrelation with latent group models

Authors: Jennifer Neville and David Jensen


Abstract
The presence of autocorrelation provides strong motivation for using relational techniques for learning and inference. Autocorrelation is a statistical dependency between the values of the same variable on related entities and is a nearly ubiquitous characteristic of relational data sets. Recent research has explored the use of collective inference techniques to exploit this phenomenon. These techniques achieve significant performance gains by modeling observed correlations among class labels of related instances, but the models fail to capture a frequent cause of autocorrelation---the presence of underlying groups that influence the attributes on a set of entities. We propose a latent group model (LGM) for relational data, which discovers and exploits the hidden structures responsible for the observed autocorrelation among class labels. Modeling the latent group structure improves model performance, increases inference efficiency, and enhances our understanding of the datasets. We evaluate performance on three relational classification tasks and show that LGM outperforms models that ignore latent group structure when there is little known information with which to seed inference.
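The phenomenon LGM exploits is easy to reproduce: if hidden groups bias the labels of their members, linked (same-group) entities agree on labels far more often than chance. The following self-contained sketch, with invented parameters, demonstrates this autocorrelation.

# Hidden groups induce label autocorrelation among linked entities.
import random

random.seed(0)
groups = []
for g in range(50):
    bias = random.random()                   # the group's hidden label bias
    members = [random.random() < bias for _ in range(10)]
    groups.append(members)

# Same-group (i.e. linked) entities agree on their label far more often
# than unrelated pairs would: ~2/3 here versus 1/2 by chance.
agree = total = 0
for members in groups:
    for i in range(len(members)):
        for j in range(i + 1, len(members)):
            agree += members[i] == members[j]
            total += 1
print("within-group label agreement:", round(agree / total, 3))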

Cite as

Jennifer Neville and David Jensen. Leveraging relational autocorrelation with latent group models. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{neville_et_al:DagSemProc.05051.10,
  author =	{Neville, Jennifer and Jensen, David},
  title =	{{Leveraging relational autocorrelation with latent group models}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.10},
  URN =		{urn:nbn:de:0030-drops-4201},
  doi =		{10.4230/DagSemProc.05051.10},
  annote =	{Keywords: Statistical relational learning, probabilistic relational models, latent variable models, autocorrelation, collective inference}
}
Document
Multi-View Learning and Link Farm Discovery

Authors: Tobias Scheffer


Abstract
The first part of this abstract focuses on estimation of mixture models for problems in which multiple views of the instances are available. Examples of this setting include clustering web pages or research papers that have intrinsic (text) and extrinsic (references) attributes. Mixture model estimation is a key problem for both semi-supervised and unsupervised learning. An appropriate optimization criterion quantifies the likelihood and the consensus among models in the individual views; maximizing this consensus minimizes a bound on the risk of assigning an instance to an incorrect mixture component. An EM algorithm maximizes this criterion. The second part of this abstract focuses on the problem of identifying link spam. Search engine optimizers inflate the page rank of a target site by spinning an artificial web for the sole purpose of providing inbound links to the target. Discriminating natural from artificial web sites is a difficult multi-view problem.
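To illustrate the consensus term, the sketch below compares the component posteriors one instance receives under two views. The numbers and the L1 measure are illustrative choices, not the paper's exact criterion; a co-EM-style learner would penalise this disagreement, summed over instances, alongside each view's log-likelihood.

# Disagreement between two views' posteriors for one instance.
def disagreement(post_view1, post_view2):
    """L1 distance between two views' posteriors over mixture components."""
    return sum(abs(a - b) for a, b in zip(post_view1, post_view2))

# Posteriors for one web page under a text view and a link view
# (hypothetical numbers over three mixture components):
text_view = [0.7, 0.2, 0.1]
link_view = [0.6, 0.3, 0.1]
print(round(disagreement(text_view, link_view), 3))   # 0.2: high consensus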

Cite as

Tobias Scheffer. Multi-View Learning and Link Farm Discovery. In Probabilistic, Logical and Relational Learning - Towards a Synthesis. Dagstuhl Seminar Proceedings, Volume 5051, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{scheffer:DagSemProc.05051.11,
  author =	{Scheffer, Tobias},
  title =	{{Multi-View Learning and Link Farm Discovery}},
  booktitle =	{Probabilistic, Logical and Relational Learning - Towards a Synthesis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{5051},
  editor =	{Luc De Raedt and Thomas Dietterich and Lise Getoor and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05051.11},
  URN =		{urn:nbn:de:0030-drops-4146},
  doi =		{10.4230/DagSemProc.05051.11},
  annote =	{Keywords: Multi-view learning}
}
