eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
27
10.4230/DagSemProc.05051.1
article
05051 Abstracts Collection – Probabilistic, Logical and Relational Learning - Towards a Synthesis
De Raedt, Luc
Dietterich, Tom
Getoor, Lise
Muggleton, Stephen H.
From 30.01.05 to 04.02.05, the Dagstuhl Seminar 05051 "Probabilistic, Logical and Relational Learning - Towards a Synthesis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.1/DagSemProc.05051.1.pdf
Statistical relational learning
probabilistic logic learning
inductive logic programming
knowledge representation
machine learning
uncertainty in artificial intelligence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
5
10.4230/DagSemProc.05051.2
article
05051 Executive Summary – Probabilistic, Logical and Relational Learning - Towards a Synthesis
De Raedt, Luc
Dietterich, Tom
Getoor, Lise
Muggleton, Stephen H.
A short report on the Dagstuhl seminar on Probabilistic, Logical and Relational Learning – Towards a Synthesis is given.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.2/DagSemProc.05051.2.pdf
Reasoning about Uncertainty
Relational and Logical Representations
Statistical Relational Learning
Inductive Logic Programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
16
10.4230/DagSemProc.05051.3
article
An Architecture for Rational Agents
Lloyd, John W.
Sears, Tim D.
This paper is concerned with designing architectures for rational agents.
In the proposed architecture, agents have belief bases that are theories
in a multi-modal, higher-order logic.
Belief bases can be modified by a belief acquisition algorithm
that includes both symbolic, on-line learning and conventional knowledge base
update as special cases.
A method of partitioning the state space of the agent in two different ways
leads to a Bayesian network and associated influence diagram for selecting actions.
The resulting agent architecture exhibits a tight integration between logic,
probability, and learning.
This approach to agent architecture is illustrated by a user agent
that is able to personalise its behaviour according to the user's
interests and preferences.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.3/DagSemProc.05051.3.pdf
Rational agent
agent architecture
belief base
Bayesian networks
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
6
10.4230/DagSemProc.05051.4
article
BLOG: Probabilistic Models with Unknown Objects
Milch, Brian
Marthi, Bhaskara
Russell, Stuart
Sontag, David
Ong, Daniel L.
Kolobov, Andrey
We introduce BLOG, a formal language for defining probability models with unknown objects and identity uncertainty. A BLOG model describes a generative process in which some steps add objects to the world, and others determine attributes and relations on these objects. Subject to certain acyclicity constraints, a BLOG model specifies a unique probability distribution over first-order model structures that can contain varying and unbounded numbers of objects. Furthermore, inference algorithms exist for a large class of BLOG models.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.4/DagSemProc.05051.4.pdf
Knowledge representation
probability
first-order logic
identity uncertainty
unknown objects
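The generative semantics described in the abstract can be illustrated with a minimal Python sketch (our own illustration, not BLOG syntax): one step samples the number of objects, so different runs yield worlds of varying, a priori unknown size; later steps then determine attributes of each object.

```python
import random

# Hypothetical sketch of a BLOG-style generative process (illustrative only,
# not BLOG syntax): first a step adds an unknown number of objects to the
# world, then further steps determine an attribute for each object.
def sample_world(rng):
    n_objects = rng.randint(1, 5)  # number uncertainty: the world size is random
    colors = [rng.choice(["blue", "green"]) for _ in range(n_objects)]
    return colors

rng = random.Random(0)
worlds = [sample_world(rng) for _ in range(3)]
print(worlds)  # three sampled worlds, possibly of different sizes
```

A real BLOG model additionally handles identity uncertainty (which observations refer to the same object), which this toy sketch omits.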
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
10
10.4230/DagSemProc.05051.5
article
Combining Bayesian Networks with Higher-Order Data Representations
Gyftodimos, Elias
Flach, Peter A.
This paper introduces Higher-Order Bayesian Networks,
a probabilistic reasoning formalism which combines the efficient
reasoning mechanisms of Bayesian Networks with the expressive
power of higher-order logics.
We discuss how the proposed graphical model is used in order to define
a probability distribution semantics over particular families of
higher-order terms.
We give an example of the application of our method on the Mutagenesis
domain, a popular dataset from the Inductive Logic Programming
community, showing how we employ probabilistic inference and model
learning for the construction of a probabilistic classifier based on
Higher-Order Bayesian Networks.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.5/DagSemProc.05051.5.pdf
Probabilistic reasoning
graphical models
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
8
10.4230/DagSemProc.05051.6
article
Exploiting independence for branch operations in Bayesian learning of C&RTs
Angelopoulos, Nicos
Cussens, James
In this paper we extend a methodology for Bayesian learning via MCMC
with the ability to grow arbitrarily long branches in C&RT
models. We are able to do so by exploiting independence in the
model construction process. The ability to grow branches rather
than single nodes has been noted as desirable in the literature.
The most distinctive feature of the underlying methodology used here,
in comparison to other approaches, is the coupling of the prior
and the proposal. The main contribution of this paper is to show
how taking advantage of independence in the coupled process allows
branch growing and swapping for proposal models.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.6/DagSemProc.05051.6.pdf
Bayesian machine learning
classification and regression trees
stochastic logic programs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
16
10.4230/DagSemProc.05051.7
article
Importance Sampling on Relational Bayesian Networks
Jaeger, Manfred
We present techniques for importance sampling from distributions defined by
Relational Bayesian Networks. The methods operate directly on the abstract
representation language, and therefore can be applied in situations where sampling
from a standard Bayesian Network representation is infeasible. We describe
experimental results from using standard, adaptive and backward sampling
strategies. Furthermore, we use in our experiments a model that illustrates
a fully general way of translating the recent framework of Markov Logic Networks
into Relational Bayesian Networks.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.7/DagSemProc.05051.7.pdf
Relational models
Importance Sampling
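As a reminder of the basic mechanism the paper builds on, here is a minimal importance-sampling sketch in Python (our own generic illustration, not the paper's method for Relational Bayesian Networks): draws from a proposal distribution are reweighted by the density ratio target/proposal to estimate an expectation under the target.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    # Density of a univariate Gaussian N(mu, sigma^2).
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(f, n=100000, seed=0):
    # Estimate E_p[f(X)] for target p = N(0,1) by sampling from a wider
    # proposal q = N(0,2) and weighting each draw by p(x)/q(x).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 2.0)                                 # draw from proposal q
        w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0)   # importance weight
        total += w * f(x)
    return total / n

# E_p[X^2] = 1 for a standard normal; the weighted average recovers it.
print(importance_estimate(lambda x: x * x))
```

The point of the paper is that for relational models one can sample directly from the abstract representation, avoiding the (possibly infeasible) ground Bayesian network; the reweighting principle, however, is the same as in this sketch.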
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
20
10.4230/DagSemProc.05051.8
article
Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting
Passerini, Andrea
Frasconi, Paolo
De Raedt, Luc
An example-trace is a sequence of steps taken by a program on
a given example input. Different approaches exist for exploiting
example-traces for learning, all explicitly inferring a
target program from positive and negative traces.
We generalize this idea by developing similarity measures between traces
in order to learn to discriminate between positive and
negative ones. This allows us to combine the expressiveness of
inductive logic programming in representing knowledge with the statistical
properties of kernel machines. Logic programs are used to generate
proofs of given visitor programs that exploit the available background
knowledge, while kernel machines are employed to learn from such proofs.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.8/DagSemProc.05051.8.pdf
Proof Trees
Logic Kernels
Learning from Traces
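A toy illustration of a similarity measure between traces (our own simplified sketch, not the proof-tree kernel defined in the paper): represent each trace as a multiset of steps and count matching step pairs. This is a dot product of step-count vectors, hence a valid positive semidefinite kernel that could be plugged into any kernel machine such as an SVM.

```python
from collections import Counter

# Toy trace kernel (illustrative sketch, not the paper's proof-tree kernel):
# treat each trace as a multiset of steps and count matching step pairs.
# Equivalent to a dot product of step-count vectors, so it is a valid kernel.
def trace_kernel(trace1, trace2):
    c1, c2 = Counter(trace1), Counter(trace2)
    return sum(c1[step] * c2[step] for step in c1)

k = trace_kernel(["call p", "unify", "succeed"],
                 ["call p", "unify", "fail"])
print(k)  # the two traces share two steps
```

The paper's kernels operate on richer structures (Prolog proof trees rather than flat step sequences), but the principle of comparing executions rather than inferring an explicit target program is the same.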
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
6
10.4230/DagSemProc.05051.9
article
Learning through failure
Sato, Taisuke
Kameya, Yoshitaka
PRISM, a symbolic-statistical modeling language
we have been developing since 1997, recently
incorporated a program transformation technique
to handle failure in generative modeling.
We show that this feature opens the way to
new kinds of symbolic models, including
EM learning from negative observations,
constrained HMMs, and finite PCFGs.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.9/DagSemProc.05051.9.pdf
Program transformation
failure
generative modeling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
14
10.4230/DagSemProc.05051.10
article
Leveraging relational autocorrelation with latent group models
Neville, Jennifer
Jensen, David
The presence of autocorrelation provides strong motivation for using relational techniques for learning and inference. Autocorrelation is a statistical dependency between the values of the same variable on related entities and is a nearly ubiquitous characteristic of relational data sets. Recent research has explored the use of collective inference techniques to exploit this phenomenon. These techniques achieve significant performance gains by modeling observed correlations among class labels of related instances, but the models fail to capture a frequent cause of autocorrelation---the presence of underlying groups that influence the attributes on a set of entities. We propose a latent group model (LGM) for relational data, which discovers and exploits the hidden structures responsible for the observed autocorrelation among class labels. Modeling the latent group structure improves model performance, increases inference efficiency, and enhances our understanding of the datasets. We evaluate performance on three relational classification tasks and show that LGM outperforms models that ignore latent group structure when there is little known information with which to seed inference.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.10/DagSemProc.05051.10.pdf
Statistical relational learning
probabilistic relational models
latent variable models
autocorrelation
collective inference
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-01-19
5051
1
6
10.4230/DagSemProc.05051.11
article
Multi-View Learning and Link Farm Discovery
Scheffer, Tobias
The first part of this abstract focuses on estimation of mixture models for problems in which multiple views of the instances are available. Examples of this setting include clustering web pages or research papers that have intrinsic (text) and extrinsic (references) attributes. Mixture model estimation is a key problem for both semi-supervised and unsupervised learning. An appropriate optimization criterion quantifies the likelihood and the consensus among models in the individual views; maximizing this consensus minimizes a bound on the risk of assigning an instance to an incorrect mixture component. An EM algorithm maximizes this criterion. The second part of this abstract focuses on the problem of identifying link spam. Search engine optimizers inflate the page rank of a target site by spinning an artificial web for the sole purpose of providing inbound links to the target. Discriminating natural from artificial web sites is a difficult multi-view problem.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05051/DagSemProc.05051.11/DagSemProc.05051.11.pdf
Multi-view learning