17 Search Results for "Hitzler, Pascal"


Document
Neural-Symbolic Learning and Reasoning (Dagstuhl Seminar 14381)

Authors: Artur d'Avila Garcez, Marco Gori, Pascal Hitzler, and Luís C. Lamb

Published in: Dagstuhl Reports, Volume 4, Issue 9 (2015)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14381 "Neural-Symbolic Learning and Reasoning", which was held from September 14th to 19th, 2014. This seminar brought together specialists in machine learning, knowledge representation and reasoning, computer vision and image understanding, natural language processing, and cognitive science. The aim of the seminar was to explore the interface among several fields that contribute to the effective integration of cognitive abilities such as learning, reasoning, vision, and language understanding in intelligent and cognitive computational systems. The seminar consisted of contributed and invited talks, and breakout and joint group discussion sessions.

Cite as

Artur d'Avila Garcez, Marco Gori, Pascal Hitzler, and Luís C. Lamb. Neural-Symbolic Learning and Reasoning (Dagstuhl Seminar 14381). In Dagstuhl Reports, Volume 4, Issue 9, pp. 50-84, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@Article{davilagarcez_et_al:DagRep.4.9.50,
  author =	{d'Avila Garcez, Artur and Gori, Marco and Hitzler, Pascal and Lamb, Lu{\'\i}s C.},
  title =	{{Neural-Symbolic Learning and Reasoning (Dagstuhl Seminar 14381)}},
  pages =	{50--84},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{9},
  editor =	{d'Avila Garcez, Artur and Gori, Marco and Hitzler, Pascal and Lamb, Lu{\'\i}s C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.4.9.50},
  URN =		{urn:nbn:de:0030-drops-48843},
  doi =		{10.4230/DagRep.4.9.50},
  annote =	{Keywords: Neural-symbolic computation, deep learning, image understanding, lifelong machine learning, natural language understanding, ontology learning}
}
Document
Cognitive Approaches for the Semantic Web (Dagstuhl Seminar 12221)

Authors: Dedre Gentner, Frank van Harmelen, Pascal Hitzler, Krzysztof Janowicz, and Kai-Uwe Kühnberger

Published in: Dagstuhl Reports, Volume 2, Issue 5 (2012)


Abstract
A major focus in the design of Semantic Web ontology languages used to be on finding a suitable balance between the expressivity of the language and the tractability of reasoning services defined over this language. This focus mirrors the original vision of a Web composed of machine-readable and understandable data. As with the classical Web a few years ago, attention has recently been shifting towards a user-centric vision of the Semantic Web. Essentially, the information stored on the Web is from and for humans. This new focus is not only reflected in the fast-growing Linked Data Web but also in the increasing influence of research from cognitive science, human-computer interaction, and machine learning. Cognitive aspects emerge as an essential ingredient for future work on knowledge acquisition, representation, reasoning, and interactions on the Semantic Web. Visual interfaces have to support semantic-based retrieval and at the same time hide the complexity of the underlying reasoning machinery from the user. Analogical and similarity-based reasoning should assist users in browsing and navigating through the rapidly increasing amount of information. Instead of pre-defined conceptualizations of the world, the selection and conceptualization of relevant information has to be tailored to the user's context on-the-fly. This involves work on ontology modularization and context-awareness, but also approaches from ecological psychology such as affordance theory, which also plays an increasing role in robotics and AI. During the Dagstuhl Seminar 12221 we discussed the most promising ways to move forward on the vision of bringing findings from cognitive science to the Semantic Web, and to create synergies between the different areas of research. While the seminar focused on the use of cognitive engineering for a user-centric Semantic Web, it also discussed the reverse direction, i.e., how Semantic Web work on knowledge representation and reasoning can feed back to the cognitive science community.

Cite as

Dedre Gentner, Frank van Harmelen, Pascal Hitzler, Krzysztof Janowicz, and Kai-Uwe Kühnberger. Cognitive Approaches for the Semantic Web (Dagstuhl Seminar 12221). In Dagstuhl Reports, Volume 2, Issue 5, pp. 93-116, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@Article{gentner_et_al:DagRep.2.5.93,
  author =	{Gentner, Dedre and van Harmelen, Frank and Hitzler, Pascal and Janowicz, Krzysztof and K\"{u}hnberger, Kai-Uwe},
  title =	{{Cognitive Approaches for the Semantic Web (Dagstuhl Seminar 12221)}},
  pages =	{93--116},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2012},
  volume =	{2},
  number =	{5},
  editor =	{Gentner, Dedre and van Harmelen, Frank and Hitzler, Pascal and Janowicz, Krzysztof and K\"{u}hnberger, Kai-Uwe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.2.5.93},
  URN =		{urn:nbn:de:0030-drops-37115},
  doi =		{10.4230/DagRep.2.5.93},
  annote =	{Keywords: Cognitive methods, Semantic Web, Analogy and similarity-based reasoning, Semantic heterogeneity and context, Symbol grounding, Emerging semantics, Commonsense reasoning}
}
Document
10302 Abstracts Collection – Learning paradigms in dynamic environments

Authors: Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
From 25.07. to 30.07.2010, the Dagstuhl Seminar 10302 "Learning paradigms in dynamic environments" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint. 10302 Abstracts Collection – Learning paradigms in dynamic environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{hammer_et_al:DagSemProc.10302.1,
  author =	{Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang and Toussaint, Marc},
  title =	{{10302 Abstracts Collection – Learning paradigms in dynamic environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.1},
  URN =		{urn:nbn:de:0030-drops-28048},
  doi =		{10.4230/DagSemProc.10302.1},
  annote =	{Keywords: Recurrent neural networks, Dynamic systems, Speech processing, Neurobiology, Neural-symbolic integration, Autonomous learning}
}
Document
10302 Summary – Learning paradigms in dynamic environments

Authors: Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
The seminar centered around problems which arise in the context of machine learning in dynamic environments. Particular emphasis was put on a couple of specific questions in this context: how to represent and abstract knowledge appropriately to shape the problem of learning in a partially unknown and complex environment and how to combine statistical inference and abstract symbolic representations; how to infer from few data and how to deal with non i.i.d. data, model revision and life-long learning; how to come up with efficient strategies to control realistic environments for which exploration is costly, the dimensionality is high and data are sparse; how to deal with very large settings; and how to apply these models in challenging application areas such as robotics, computer vision, or the web.

Cite as

Barbara Hammer, Pascal Hitzler, Wolfgang Maass, and Marc Toussaint. 10302 Summary – Learning paradigms in dynamic environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{hammer_et_al:DagSemProc.10302.2,
  author =	{Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang and Toussaint, Marc},
  title =	{{10302 Summary – Learning paradigms in dynamic environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.2},
  URN =		{urn:nbn:de:0030-drops-28027},
  doi =		{10.4230/DagSemProc.10302.2},
  annote =	{Keywords: Summary}
}
Document
Neurons and Symbols: A Manifesto

Authors: Artur S. d'Avila Garcez

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.

Cite as

Artur S. d'Avila Garcez. Neurons and Symbols: A Manifesto. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{davilagarcez:DagSemProc.10302.3,
  author =	{d'Avila Garcez, Artur S.},
  title =	{{Neurons and Symbols: A Manifesto}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.3},
  URN =		{urn:nbn:de:0030-drops-28005},
  doi =		{10.4230/DagSemProc.10302.3},
  annote =	{Keywords: Neuro-symbolic systems, cognitive models, machine learning}
}
Document
One-shot Learning of Poisson Distributions in fast changing environments

Authors: Peter Tino

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
In Bioinformatics, Audic and Claverie were among the first to systematically study the influence of random fluctuations and sampling size on the reliability of digital expression profile data. For a transcript representing a small fraction of the library and a large number N of clones, the probability of observing x tags of the same gene will be well-approximated by the Poisson distribution parametrised by its mean (and variance) m>0, where the unknown parameter m signifies the number of transcripts of the given type (tag) per N clones in the cDNA library. On an abstract level, to determine whether a gene is differentially expressed or not, one has two numbers generated from two distinct Poisson distributions, and based on this (extremely sparse) sample one has to decide whether the two Poisson distributions are identical or not. This can be used e.g. to determine equivalence of Poisson photon sources (up to time shift) in gravitational lensing. Each Poisson distribution is represented by a single measurement only, which is, of course, from a purely statistical standpoint very problematic. The key instrument of the Audic-Claverie approach is a distribution P over tag counts y in one library informed by the tag count x in the other library, under the null hypothesis that the tag counts are generated from the same but unknown Poisson distribution. P is obtained by Bayesian averaging (infinite mixture) of all possible Poisson distributions with mixing proportions equal to the posteriors (given x) under the flat prior over m. We ask: given that the tag count samples from SAGE libraries are *extremely* limited, how useful actually is the Audic-Claverie methodology? We rigorously analyse the A-C statistic P that forms a backbone of the methodology and represents our knowledge of the underlying tag generating process based on one observation. We show that the A-C statistic P and the underlying Poisson distribution of the tag counts share the same mode structure. Moreover, the K-L divergence from the true unknown Poisson distribution to the A-C statistic is minimised when the A-C statistic is conditioned on the mode of the Poisson distribution. Most importantly (and perhaps rather surprisingly), the expectation of this K-L divergence never exceeds 1/2 bit! This constitutes a rigorous quantitative argument, extending the previous empirical Monte Carlo studies, that supports the widespread use of the Audic-Claverie method, even though, by their very nature, SAGE libraries represent very sparse samples.
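Under the flat prior over m described in the abstract, the Bayesian average defining the A-C statistic has the well-known closed form p(y|x) = (x+y)! / (x! y! 2^(x+y+1)). The following Python sketch is an illustration of that formula only (it is not code from the paper) and lets one verify two of its basic properties: it is a proper distribution over y, and it is symmetric in x and y.

```python
from math import comb

def ac_statistic(y, x):
    """Audic-Claverie statistic: probability of observing y tags in one
    library given x tags in the other, under the null hypothesis of a
    shared (unknown) Poisson mean with a flat prior -- i.e. a Bayesian
    average of all Poisson distributions weighted by the posterior given x."""
    return comb(x + y, x) * 0.5 ** (x + y + 1)

# Proper distribution over y (tail beyond 200 is negligible for x = 5):
total = sum(ac_statistic(y, 5) for y in range(200))
```

For example, `ac_statistic(3, 7) == ac_statistic(7, 3)`, reflecting that the two libraries play interchangeable roles under the null hypothesis.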

Cite as

Peter Tino. One-shot Learning of Poisson Distributions in fast changing environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{tino:DagSemProc.10302.4,
  author =	{Tino, Peter},
  title =	{{One-shot Learning of Poisson Distributions in fast changing environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.4},
  URN =		{urn:nbn:de:0030-drops-27998},
  doi =		{10.4230/DagSemProc.10302.4},
  annote =	{Keywords: Audic-Claverie statistic, Bayesian averaging, information theory, one-shot learning, Poisson distribution}
}
Document
Some steps towards a general principle for dimensionality reduction mappings

Authors: Barbara Hammer, Kerstin Bunte, and Michael Biehl

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
In the past years, many dimensionality reduction methods have been established which make it possible to visualize high-dimensional data sets. Recently, formal evaluation schemes have also been proposed for data visualization, which allow a quantitative evaluation along general principles. Most techniques provide a mapping of an a priori given finite set of points only, requiring additional steps for out-of-sample extensions. We propose a general view on dimensionality reduction based on the concept of cost functions and, based on this general principle, extend dimensionality reduction to explicit mappings of the data manifold. This offers the possibility of simple out-of-sample extensions. Further, it opens a way towards a theory of data visualization that takes the perspective of its generalization ability to new data points. We demonstrate the approach with a simple example.

Cite as

Barbara Hammer, Kerstin Bunte, and Michael Biehl. Some steps towards a general principle for dimensionality reduction mappings. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{hammer_et_al:DagSemProc.10302.5,
  author =	{Hammer, Barbara and Bunte, Kerstin and Biehl, Michael},
  title =	{{Some steps towards a general principle for dimensionality reduction mappings}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.5},
  URN =		{urn:nbn:de:0030-drops-28034},
  doi =		{10.4230/DagSemProc.10302.5},
  annote =	{Keywords: Visualization, dimensionality reduction}
}
Document
Why deterministic logic is hard to learn but Statistical Relational Learning works

Authors: Marc Toussaint

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
A brief note on why we think that the statistical relational learning framework is a great advancement over deterministic logic – in particular in the context of model-based Reinforcement Learning.

Cite as

Marc Toussaint. Why deterministic logic is hard to learn but Statistical Relational Learning works. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{toussaint:DagSemProc.10302.6,
  author =	{Toussaint, Marc},
  title =	{{Why deterministic logic is hard to learn but Statistical Relational Learning works}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.6},
  URN =		{urn:nbn:de:0030-drops-28014},
  doi =		{10.4230/DagSemProc.10302.6},
  annote =	{Keywords: Statistical relational learning, relational model-based Reinforcement Learning}
}
Document
Approximate OWL Instance Retrieval with SCREECH

Authors: Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph, and Tuvshintur Tserendorj

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
With the increasing interest in expressive ontologies for the Semantic Web, it is critical to develop scalable and efficient ontology reasoning techniques that can properly cope with very high data volumes. For certain application domains, approximate reasoning solutions, which trade soundness or completeness for increased reasoning speed, will help to deal with the high computational complexities which state-of-the-art ontology reasoning tools have to face. In this paper, we present a comprehensive overview of the SCREECH approach to approximate instance retrieval with OWL ontologies, which is based on the KAON2 algorithms, facilitating a compilation of OWL DL TBoxes into Datalog, which is tractable in terms of data complexity. We present three different instantiations of the SCREECH approach, and report on experiments which show that the gain in efficiency outweighs the number of introduced mistakes in the reasoning process.

Cite as

Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph, and Tuvshintur Tserendorj. Approximate OWL Instance Retrieval with SCREECH. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{hitzler_et_al:DagSemProc.08091.3,
  author =	{Hitzler, Pascal and Kr\"{o}tzsch, Markus and Rudolph, Sebastian and Tserendorj, Tuvshintur},
  title =	{{Approximate OWL Instance Retrieval with SCREECH}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Anthony G. Cohn and David C. Hogg and Ralf M\"{o}ller and Bernd Neumann},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.3},
  URN =		{urn:nbn:de:0030-drops-16157},
  doi =		{10.4230/DagSemProc.08091.3},
  annote =	{Keywords: Description logics, automated reasoning, approximate reasoning, Horn logic}
}
Document
08041 Abstracts Collection – Recurrent Neural Networks - Models, Capacities, and Applications

Authors: Luc De Raedt, Barbara Hammer, Pascal Hitzler, and Wolfgang Maass

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks- Models, Capacities, and Applications (2008)


Abstract
From January 20 to 25, 2008, the Dagstuhl Seminar 08041 "Recurrent Neural Networks – Models, Capacities, and Applications" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Luc De Raedt, Barbara Hammer, Pascal Hitzler, and Wolfgang Maass. 08041 Abstracts Collection – Recurrent Neural Networks - Models, Capacities, and Applications. In Recurrent Neural Networks- Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{deraedt_et_al:DagSemProc.08041.1,
  author =	{De Raedt, Luc and Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang},
  title =	{{08041 Abstracts Collection – Recurrent Neural Networks - Models, Capacities, and Applications}},
  booktitle =	{Recurrent Neural Networks- Models, Capacities, and Applications},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{Luc De Raedt and Barbara Hammer and Pascal Hitzler and Wolfgang Maass},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.1},
  URN =		{urn:nbn:de:0030-drops-14250},
  doi =		{10.4230/DagSemProc.08041.1},
  annote =	{Keywords: Recurrent Neural Networks, Neural-Symbolic Integration, Biological Models, Hybrid Models, Relational Learning, Echo State Networks, Spike Prediction, Unsupervised Recurrent Networks}
}
Document
08041 Summary – Recurrent Neural Networks - Models, Capacities, and Applications

Authors: Luc De Raedt, Barbara Hammer, Pascal Hitzler, and Wolfgang Maass

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks- Models, Capacities, and Applications (2008)


Abstract
The seminar centered around recurrent information processing in neural systems and its connections to brain sciences, on the one hand, and higher symbolic reasoning, on the other. The goal was to explore connections across the disciplines and to tackle important questions which arise in all sub-disciplines, such as the representation of temporal information, generalization ability, inference, and learning.

Cite as

Luc De Raedt, Barbara Hammer, Pascal Hitzler, and Wolfgang Maass. 08041 Summary – Recurrent Neural Networks - Models, Capacities, and Applications. In Recurrent Neural Networks- Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{deraedt_et_al:DagSemProc.08041.2,
  author =	{De Raedt, Luc and Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang},
  title =	{{08041 Summary – Recurrent Neural Networks - Models, Capacities, and Applications}},
  booktitle =	{Recurrent Neural Networks- Models, Capacities, and Applications},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{Luc De Raedt and Barbara Hammer and Pascal Hitzler and Wolfgang Maass},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.2},
  URN =		{urn:nbn:de:0030-drops-14243},
  doi =		{10.4230/DagSemProc.08041.2},
  annote =	{Keywords: Recurrent networks}
}
Document
Equilibria of Iterative Softmax and Critical Temperatures for Intermittent Search in Self-Organizing Neural Networks

Authors: Peter Tino

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks- Models, Capacities, and Applications (2008)


Abstract
Optimization dynamics using self-organizing neural networks (SONN) driven by softmax weight renormalization has been shown to be capable of intermittent search for high-quality solutions in assignment optimization problems. However, the search is sensitive to temperature setting in the softmax renormalization step. The powerful search occurs only at the critical temperature that depends on the problem size. So far the critical temperatures have been determined only by tedious trial-and-error numerical simulations. We offer a rigorous analysis of the search performed by SONN and derive analytical approximations to the critical temperatures. We demonstrate on a set of N-queens problems for a wide range of problem sizes N that the analytically determined critical temperatures predict the optimal working temperatures for SONN intermittent search very well.
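The softmax weight renormalization step that drives these SONN dynamics can be sketched as follows. This is a minimal illustration under assumed notation (the matrix `W` and temperature `T` are hypothetical names, not taken from the paper): each row of the assignment-weight matrix is renormalized by a softmax at temperature T, so that rows stay stochastic while lower temperatures push them toward winner-take-all assignments.

```python
import numpy as np

def softmax_renormalize(W, T):
    """Softmax-renormalize each row of an assignment-weight matrix W at
    temperature T. Subtracting the row maximum stabilizes the exponentials;
    smaller T sharpens each row toward a winner-take-all pattern."""
    E = np.exp((W - W.max(axis=1, keepdims=True)) / T)
    return E / E.sum(axis=1, keepdims=True)

W = np.array([[1.0, 2.0], [3.0, 1.0]])
P_warm = softmax_renormalize(W, 1.0)   # softer, near-uniform rows
P_cold = softmax_renormalize(W, 0.1)   # sharper, near winner-take-all rows
```

The temperature sensitivity discussed in the abstract corresponds to how strongly each renormalization step sharpens the rows: only near the critical temperature does the search intermittently escape from and return to high-quality assignments.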

Cite as

Peter Tino. Equilibria of Iterative Softmax and Critical Temperatures for Intermittent Search in Self-Organizing Neural Networks. In Recurrent Neural Networks- Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{tino:DagSemProc.08041.3,
  author =	{Tino, Peter},
  title =	{{Equilibria of Iterative Softmax and Critical Temperatures for Intermittent Search in Self-Organizing Neural Networks}},
  booktitle =	{Recurrent Neural Networks- Models, Capacities, and Applications},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{Luc De Raedt and Barbara Hammer and Pascal Hitzler and Wolfgang Maass},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.3},
  URN =		{urn:nbn:de:0030-drops-14202},
  doi =		{10.4230/DagSemProc.08041.3},
  annote =	{Keywords: Recurrent self-organizing maps, symmetry breaking bifurcation, N-queens}
}
Document
Perspectives of Neuro--Symbolic Integration – Extended Abstract --

Authors: Kai-Uwe Kühnberger, Helmar Gust, and Peter Geibel

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks- Models, Capacities, and Applications (2008)


Abstract
There is an obvious tension between symbolic and subsymbolic theories, because both show complementary strengths and weaknesses in corresponding applications and underlying methodologies. The resulting gap in the foundations and the applicability of these approaches is theoretically unsatisfactory and practically undesirable. We sketch a theory that bridges this gap between symbolic and subsymbolic approaches by the introduction of a Topos-based semi-symbolic level used for coding logical first-order expressions in a homogeneous framework. This semi-symbolic level can be used for neural learning of logical first-order theories. Besides a presentation of the general idea of the framework, we sketch some challenges and important open problems for future research with respect to the presented approach and the field of neuro-symbolic integration, in general.

Cite as

Kai-Uwe Kühnberger, Helmar Gust, and Peter Geibel. Perspectives of Neuro--Symbolic Integration – Extended Abstract --. In Recurrent Neural Networks- Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{kuhnberger_et_al:DagSemProc.08041.4,
  author =	{K\"{u}hnberger, Kai-Uwe and Gust, Helmar and Geibel, Peter},
  title =	{{Perspectives of Neuro--Symbolic Integration – Extended Abstract --}},
  booktitle =	{Recurrent Neural Networks- Models, Capacities, and Applications},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{Luc De Raedt and Barbara Hammer and Pascal Hitzler and Wolfgang Maass},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.4},
  URN =		{urn:nbn:de:0030-drops-14226},
  doi =		{10.4230/DagSemProc.08041.4},
  annote =	{Keywords: Neuro-Symbolic Integration, Topos Theory, First-Order Logic}
}
Document
The Grand Challenges and Myths of Neural-Symbolic Computation

Authors: Luis C. Lamb

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks- Models, Capacities, and Applications (2008)


Abstract
The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer and cognitive sciences. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive and computer science are strongly based on several non-classical reasoning formalisms, methodologies and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them, considering recent research advances.

Cite as

Luis C. Lamb. The Grand Challenges and Myths of Neural-Symbolic Computation. In Recurrent Neural Networks - Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{lamb:DagSemProc.08041.5,
  author =	{Lamb, Luis C.},
  title =	{{The Grand Challenges and Myths of Neural-Symbolic Computation}},
  booktitle =	{Recurrent Neural Networks - Models, Capacities, and Applications},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{De Raedt, Luc and Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.5},
  URN =		{urn:nbn:de:0030-drops-14233},
  doi =		{10.4230/DagSemProc.08041.5},
  annote =	{Keywords: Connectionist non-classical logics, neural-symbolic computation, non-classical reasoning, computational cognitive models}
}
Document
The role of recurrent networks in neural architectures of grounded cognition: learning of control

Authors: Frank Van der Velde and Marc de Kamps

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks - Models, Capacities, and Applications (2008)


Abstract
Recurrent networks have been used as neural models of language processing, with mixed results. Here, we discuss the role of recurrent networks in a neural architecture of grounded cognition. In particular, we discuss how the control of binding in this architecture can be learned. We trained a simple recurrent network (SRN) and a feedforward network (FFN) for this task. The results show that information from the architecture is needed as input for these networks to learn control of binding. Thus, both control systems are recurrent. We found that the recurrent system consisting of the architecture and an SRN or an FFN as a "core" can learn basic (but recursive) sentence structures. Problems with control of binding arise when the system with the SRN is tested on a number of new sentence structures. In contrast, control of binding for these structures succeeds with the FFN. Yet, for some structures with (unlimited) embeddings, difficulties arise due to dynamical binding conflicts in the architecture itself. In closing, we discuss potential future developments of the architecture presented here.

Cite as

Frank Van der Velde and Marc de Kamps. The role of recurrent networks in neural architectures of grounded cognition: learning of control. In Recurrent Neural Networks - Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, pp. 1-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{vandervelde_et_al:DagSemProc.08041.6,
  author =	{Van der Velde, Frank and de Kamps, Marc},
  title =	{{The role of recurrent networks in neural architectures of grounded cognition: learning of control}},
  booktitle =	{Recurrent Neural Networks - Models, Capacities, and Applications},
  pages =	{1--18},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{De Raedt, Luc and Hammer, Barbara and Hitzler, Pascal and Maass, Wolfgang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.6},
  URN =		{urn:nbn:de:0030-drops-14213},
  doi =		{10.4230/DagSemProc.08041.6},
  annote =	{Keywords: Grounded representations, binding control, combinatorial structures, neural architecture, recurrent network, learning}
}