6 Search Results for "Toussaint, Marc"


Document
10302 Abstracts Collection – Learning paradigms in dynamic environments

Authors: Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
From July 25 to July 30, 2010, Dagstuhl Seminar 10302 "Learning paradigms in dynamic environments" was held at Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

Cite as

Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint. 10302 Abstracts Collection – Learning paradigms in dynamic environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{hammer_et_al:DagSemProc.10302.1,
  author =	{Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang and Toussaint, Marc},
  title =	{{10302 Abstracts Collection – Learning paradigms in dynamic environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.1},
  URN =		{urn:nbn:de:0030-drops-28048},
  doi =		{10.4230/DagSemProc.10302.1},
  annote =	{Keywords: Recurrent neural networks, Dynamic systems, Speech processing, Neurobiology, Neural-symbolic integration, Autonomous learning}
}
Document
10302 Summary – Learning paradigms in dynamic environments

Authors: Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
The seminar centered on problems that arise in the context of machine learning in dynamic environments. Particular emphasis was put on several specific questions: how to represent and abstract knowledge appropriately to shape the problem of learning in a partially unknown and complex environment, and how to combine statistical inference with abstract symbolic representations; how to infer from few data and how to deal with non-i.i.d. data, model revision, and life-long learning; how to devise efficient strategies for controlling realistic environments in which exploration is costly, the dimensionality is high, and data are sparse; how to deal with very large settings; and how to apply these models in challenging application areas such as robotics, computer vision, or the web.

Cite as

Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint. 10302 Summary – Learning paradigms in dynamic environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{hammer_et_al:DagSemProc.10302.2,
  author =	{Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang and Toussaint, Marc},
  title =	{{10302 Summary – Learning paradigms in dynamic environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.2},
  URN =		{urn:nbn:de:0030-drops-28027},
  doi =		{10.4230/DagSemProc.10302.2},
  annote =	{Keywords: Summary}
}
Document
Neurons and Symbols: A Manifesto

Authors: Artur S. d'Avila Garcez

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms, and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning, and automated reasoning, and list some of the challenges the area of neural-symbolic computation must meet to achieve the promise of effectively integrating robust learning and expressive reasoning under uncertainty.

Cite as

Artur S. d'Avila Garcez. Neurons and Symbols: A Manifesto. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{davilagarcez:DagSemProc.10302.3,
  author =	{d'Avila Garcez, Artur S.},
  title =	{{Neurons and Symbols: A Manifesto}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.3},
  URN =		{urn:nbn:de:0030-drops-28005},
  doi =		{10.4230/DagSemProc.10302.3},
  annote =	{Keywords: Neuro-symbolic systems, cognitive models, machine learning}
}
Document
One-shot Learning of Poisson Distributions in fast changing environments

Authors: Peter Tino

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
In bioinformatics, Audic and Claverie were among the first to systematically study the influence of random fluctuations and sampling size on the reliability of digital expression profile data. For a transcript representing a small fraction of the library and a large number N of clones, the probability of observing x tags of the same gene is well approximated by the Poisson distribution parametrised by its mean (and variance) m > 0, where the unknown parameter m signifies the number of transcripts of the given type (tag) per N clones in the cDNA library. On an abstract level, to determine whether a gene is differentially expressed, one has two numbers generated from two distinct Poisson distributions, and based on this (extremely sparse) sample one has to decide whether the two Poisson distributions are identical. The same setting can be used, e.g., to determine the equivalence of Poisson photon sources (up to time shift) in gravitational lensing. Each Poisson distribution is represented by a single measurement only, which is, of course, very problematic from a purely statistical standpoint.

The key instrument of the Audic-Claverie approach is a distribution P over tag counts y in one library, informed by the tag count x in the other library, under the null hypothesis that both tag counts are generated from the same but unknown Poisson distribution. P is obtained by Bayesian averaging (an infinite mixture) of all possible Poisson distributions, with mixing proportions equal to the posteriors (given x) under a flat prior over m.

We ask: given that the tag count samples from SAGE libraries are *extremely* limited, how useful is the Audic-Claverie methodology, actually? We rigorously analyse the A-C statistic P, which forms the backbone of the methodology and represents our knowledge of the underlying tag-generating process based on one observation. We show that the A-C statistic P and the underlying Poisson distribution of the tag counts share the same mode structure. Moreover, the K-L divergence from the true unknown Poisson distribution to the A-C statistic is minimised when the A-C statistic is conditioned on the mode of the Poisson distribution. Most importantly (and perhaps rather surprisingly), the expectation of this K-L divergence never exceeds 1/2 bit. This constitutes a rigorous quantitative argument, extending the previous empirical Monte Carlo studies, that supports the widespread use of the Audic-Claverie method, even though the SAGE libraries by their very nature represent very sparse samples.
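A minimal numerical sketch of these claims (not code from the paper): assuming equal library sizes, Bayesian averaging of Poissons under a flat prior on m gives the A-C statistic in the closed form P(y|x) = (x+y)! / (x! y! 2^(x+y+1)). The Python below computes the K-L divergence from a true Poisson to this statistic and its expectation over x; the function names and the truncation bound y_max are illustrative choices.

import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def ac_log_pmf(y, x):
    # A-C statistic for equal library sizes:
    # log P(y|x) = log (x+y)! - log x! - log y! - (x+y+1) log 2
    return (gammaln(x + y + 1) - gammaln(x + 1) - gammaln(y + 1)
            - (x + y + 1) * np.log(2.0))

def kl_poisson_to_ac(m, x, y_max=500):
    # K-L divergence (in bits) from the true Poisson(m) to the A-C
    # statistic P(.|x); the sum is truncated at y_max, where both
    # tails are negligible for moderate m.
    ys = np.arange(y_max)
    p = poisson.pmf(ys, m)
    log_q = ac_log_pmf(ys, x)
    return np.sum(p * (np.log(np.maximum(p, 1e-300)) - log_q)) / np.log(2.0)

m = 5.0
# KL is minimised when x equals the mode of Poisson(m) (here 4 or 5):
for x in [0, 2, 5, 10]:
    print(f"x = {x:2d}: KL = {kl_poisson_to_ac(m, x):.3f} bits")

# Expected K-L over x ~ Poisson(m); the paper's bound says <= 1/2 bit.
xs = np.arange(100)
e_kl = sum(poisson.pmf(x, m) * kl_poisson_to_ac(m, x) for x in xs)
print(f"E[KL] = {e_kl:.3f} bits")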

Cite as

Peter Tino. One-shot Learning of Poisson Distributions in fast changing environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{tino:DagSemProc.10302.4,
  author =	{Tino, Peter},
  title =	{{One-shot Learning of Poisson Distributions in fast changing environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.4},
  URN =		{urn:nbn:de:0030-drops-27998},
  doi =		{10.4230/DagSemProc.10302.4},
  annote =	{Keywords: Audic-Claverie statistic, Bayesian averaging, information theory, one-shot learning, Poisson distribution}
}
Document
Some steps towards a general principle for dimensionality reduction mappings

Authors: Barbara Hammer, Kerstin Bunte, and Michael Biehl

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
In past years, many dimensionality reduction methods have been established that allow one to visualize high-dimensional data sets. Recently, formal evaluation schemes have also been proposed for data visualization, which allow a quantitative evaluation along general principles. Most techniques provide a mapping of an a priori given finite set of points only, requiring additional steps for out-of-sample extensions. We propose a general view on dimensionality reduction based on the concept of cost functions and, based on this general principle, extend dimensionality reduction to explicit mappings of the data manifold. This offers the possibility of simple out-of-sample extensions. Further, it opens a way towards a theory of data visualization that takes the perspective of its generalization ability to new data points. We demonstrate the approach with a simple example.
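As an illustration of the explicit-mapping idea, here is a hedged sketch (an illustrative choice, not the paper's specific formulation): the embedding is parametrised as y = x W with a linear map W, and W is trained by plain gradient descent on a squared-distance ("sstress") cost, so unseen points are embedded by the same map. The toy data, the cost, and the learning rate are all assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                   # toy high-dimensional data
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)       # squared pairwise input distances

# Parametrise the embedding explicitly: y = x @ W (a linear map here).
W = rng.normal(scale=0.7, size=(10, 2))
lr = 0.01

for step in range(500):
    Y = X @ W
    diff = Y[:, None] - Y[None]                  # y_i - y_j, shape (n, n, 2)
    d2 = (diff ** 2).sum(-1)                     # squared output distances
    # cost: mean_ij (d2_ij - D2_ij)^2 ("sstress"); gradient w.r.t. Y,
    # then chain rule through the mapping to get the gradient w.r.t. W
    grad_Y = 8.0 * ((d2 - D2)[..., None] * diff).sum(axis=1) / len(X) ** 2
    W -= lr * X.T @ grad_Y

# Out-of-sample extension is immediate: embed unseen points with the same map.
x_new = rng.normal(size=10)
y_new = x_new @ W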

Cite as

Barbara Hammer, Kerstin Bunte, and Michael Biehl. Some steps towards a general principle for dimensionality reduction mappings. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{hammer_et_al:DagSemProc.10302.5,
  author =	{Hammer, Barbara and Bunte, Kerstin and Biehl, Michael},
  title =	{{Some steps towards a general principle for dimensionality reduction mappings}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.5},
  URN =		{urn:nbn:de:0030-drops-28034},
  doi =		{10.4230/DagSemProc.10302.5},
  annote =	{Keywords: Visualization, dimensionality reduction}
}
Document
Why deterministic logic is hard to learn but Statistical Relational Learning works

Authors: Marc Toussaint

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
A brief note on why we think that the statistical relational learning framework is a major advance over deterministic logic, in particular in the context of model-based Reinforcement Learning.

Cite as

Marc Toussaint. Why deterministic logic is hard to learn but Statistical Relational Learning works. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{toussaint:DagSemProc.10302.6,
  author =	{Toussaint, Marc},
  title =	{{Why deterministic logic is hard to learn but Statistical Relational Learning works}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.6},
  URN =		{urn:nbn:de:0030-drops-28014},
  doi =		{10.4230/DagSemProc.10302.6},
  annote =	{Keywords: Statistical relational learning, relational model-based Reinforcement Learning}
}
  • Refine by Author
  • 3 Hammer, Barbara
  • 3 Toussaint, Marc
  • 2 Hitzler, Pascal
  • 2 Maass, Wolfgang
  • 1 Biehl, Michael

  • Refine by Keyword
  • 1 Audic-Claverie statistic
  • 1 Autonomous learning
  • 1 Bayesian averaging
  • 1 Dynamic systems
  • 1 Neural-symbolic integration

  • Refine by Type
  • 6 document

  • Refine by Publication Year
  • 6 2010
