20 Search Results for "Cohn, Anthony C."


Short Paper
QualiNet: Acquiring Bird’s Eye View Qualitative Spatial Representation from 2D Images in Automated Vehicle Perception (Short Paper)

Authors: Nassim Belmecheri

Published in: LIPIcs, Volume 355, 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)


Abstract
We present QualiNet, an end-to-end deep learning framework that acquires Bird’s Eye View (BEV) qualitative spatial relations directly from 2D images, eliminating the need for depth sensors. The system combines 2D object detection, masking, and classification to infer Rectangle Algebra (RA) and Qualitative Distance Calculus (QDC) relations. Evaluated on NuScenes and PandaSet datasets, QualiNet achieves 91% accuracy for RA, 80% for QDC, and 99% top-2 accuracy, demonstrating robust performance for automated vehicle perception.

Cite as

Nassim Belmecheri. QualiNet: Acquiring Bird’s Eye View Qualitative Spatial Representation from 2D Images in Automated Vehicle Perception (Short Paper). In 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 355, pp. 14:1-14:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{belmecheri:LIPIcs.TIME.2025.14,
  author =	{Belmecheri, Nassim},
  title =	{{QualiNet: Acquiring Bird's Eye View Qualitative Spatial Representation from 2D Images in Automated Vehicle Perception}},
  booktitle =	{32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)},
  pages =	{14:1--14:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-401-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{355},
  editor =	{Vidal, Thierry and Wa{\l}\k{e}ga, Przemys{\l}aw Andrzej},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2025.14},
  URN =		{urn:nbn:de:0030-drops-244608},
  doi =		{10.4230/LIPIcs.TIME.2025.14},
  annote =	{Keywords: Qualitative Spatial Representation, Deep Learning, Computer vision, Qualitative Scene Understanding, Spatio-temporal representation and reasoning models (including moving objects tracking)}
}
Assessing Map Reproducibility with Visual Question-Answering: An Empirical Evaluation

Authors: Eftychia Koukouraki, Auriol Degbelo, and Christian Kray

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Reproducibility is a key principle of the modern scientific method. Maps, as an important means of communicating scientific results in GIScience and across disciplines, should be reproducible. Currently, map reproducibility assessment is done manually, which makes the assessment process tedious and time-consuming, ultimately limiting its efficiency. Hence, this work explores the extent to which Visual Question-Answering (VQA) can be used to automate some tasks relevant to map reproducibility assessment. We selected five state-of-the-art vision language models (VLMs) and followed a three-step approach to evaluate their ability to discriminate between maps and other images, interpret map content, and compare two map images using VQA. Our results show that current VLMs already possess map-reading capabilities and demonstrate understanding of spatial concepts, such as cardinal directions, geographic scope, and legend interpretation. Our paper demonstrates the potential of using VQA to support reproducibility assessment and highlights the outstanding issues that need to be addressed to achieve accurate, trustworthy map descriptions, thereby reducing the time and effort required by human evaluators.

Cite as

Eftychia Koukouraki, Auriol Degbelo, and Christian Kray. Assessing Map Reproducibility with Visual Question-Answering: An Empirical Evaluation. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 13:1-13:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{koukouraki_et_al:LIPIcs.GIScience.2025.13,
  author =	{Koukouraki, Eftychia and Degbelo, Auriol and Kray, Christian},
  title =	{{Assessing Map Reproducibility with Visual Question-Answering: An Empirical Evaluation}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{13:1--13:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.13},
  URN =		{urn:nbn:de:0030-drops-238426},
  doi =		{10.4230/LIPIcs.GIScience.2025.13},
  annote =	{Keywords: map comparison, computational reproducibility, visual question answering, large language models, GeoAI}
}
Research
CoaKG: A Contextualized Knowledge Graph Approach for Exploratory Search and Decision Making

Authors: Veronica dos Santos, Daniel Schwabe, Altigran Soares da Silva, and Sérgio Lifschitz

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
In decision-making scenarios, an information need arises due to a knowledge gap when a decision-maker needs more knowledge to make a decision. Users may take the initiative to acquire knowledge to fill this gap through exploratory search approaches using Knowledge Graphs (KGs) as information sources, but their queries can be incomplete, inaccurate, and ambiguous. Although KGs have great potential for exploratory search, they are incomplete by nature. Moreover, a Trust Layer is needed if crowd-sourced KGs, and KGs constructed by integrating several information sources of varying quality, are to be consumed effectively. Our research aims to enrich and allow querying KGs to support context-aware exploration in decision-making scenarios. We propose a layered architecture for Context Augmented Knowledge Graphs-based Decision Support Systems with a Knowledge Layer that operates under a Dual Open World Assumption (DOWA). Under DOWA, the evaluation of the truthfulness of the information obtained from KGs depends on the context of its claims and the tasks carried out or intended (purpose). The Knowledge Layer comprises a Context Augmented KG (CoaKG) and a CoaKG Query Engine. The CoaKG contains contextual mappings to identify explicit context and rules to infer implicit context. The CoaKG Query Engine is designed as a query-answering approach that retrieves all contextualized answers from the CoaKG. A Proof of Concept (PoC) based on Wikidata was developed to evaluate the effectiveness of the Knowledge Layer.

Cite as

Veronica dos Santos, Daniel Schwabe, Altigran Soares da Silva, and Sérgio Lifschitz. CoaKG: A Contextualized Knowledge Graph Approach for Exploratory Search and Decision Making. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 4:1-4:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Article{dossantos_et_al:TGDK.3.1.4,
  author =	{dos Santos, Veronica and Schwabe, Daniel and da Silva, Altigran Soares and Lifschitz, S\'{e}rgio},
  title =	{{CoaKG: A Contextualized Knowledge Graph Approach for Exploratory Search and Decision Making}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:27},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.4},
  URN =		{urn:nbn:de:0030-drops-236685},
  doi =		{10.4230/TGDK.3.1.4},
  annote =	{Keywords: Knowledge Graphs, Context Search, Decision Support}
}
Survey
How Does Knowledge Evolve in Open Knowledge Graphs?

Authors: Axel Polleres, Romana Pernisch, Angela Bonifati, Daniele Dell'Aglio, Daniil Dobriy, Stefania Dumbrava, Lorena Etcheverry, Nicolas Ferranti, Katja Hose, Ernesto Jiménez-Ruiz, Matteo Lissandrini, Ansgar Scherp, Riccardo Tommasini, and Johannes Wachs

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge


Abstract
Openly available, collaboratively edited Knowledge Graphs (KGs) are key platforms for the collective management of evolving knowledge. The present work aims to provide an analysis of the obstacles related to investigating and processing specifically this central aspect of evolution in KGs. To this end, we discuss (i) the dimensions of evolution in KGs, (ii) the observability of evolution in existing, open, collaboratively constructed Knowledge Graphs over time, and (iii) possible metrics to analyse this evolution. We provide an overview of relevant state-of-the-art research, ranging from metrics developed for Knowledge Graphs specifically to potential methods from related fields such as network science. Additionally, we discuss technical approaches - and their current limitations - related to storing, analysing and processing large and evolving KGs in terms of handling typical KG downstream tasks.

Cite as

Axel Polleres, Romana Pernisch, Angela Bonifati, Daniele Dell'Aglio, Daniil Dobriy, Stefania Dumbrava, Lorena Etcheverry, Nicolas Ferranti, Katja Hose, Ernesto Jiménez-Ruiz, Matteo Lissandrini, Ansgar Scherp, Riccardo Tommasini, and Johannes Wachs. How Does Knowledge Evolve in Open Knowledge Graphs? In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 11:1-11:59, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Article{polleres_et_al:TGDK.1.1.11,
  author =	{Polleres, Axel and Pernisch, Romana and Bonifati, Angela and Dell'Aglio, Daniele and Dobriy, Daniil and Dumbrava, Stefania and Etcheverry, Lorena and Ferranti, Nicolas and Hose, Katja and Jim\'{e}nez-Ruiz, Ernesto and Lissandrini, Matteo and Scherp, Ansgar and Tommasini, Riccardo and Wachs, Johannes},
  title =	{{How Does Knowledge Evolve in Open Knowledge Graphs?}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{11:1--11:59},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.11},
  URN =		{urn:nbn:de:0030-drops-194855},
  doi =		{10.4230/TGDK.1.1.11},
  annote =	{Keywords: KG evolution, temporal KG, versioned KG, dynamic KG}
}
08091 Abstracts Collection – Logic and Probability for Scene Interpretation

Authors: Bernd Neumann, Anthony C. Cohn, David C. Hogg, and Ralf Möller

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
From 25 February to 29 February 2008, the Dagstuhl Seminar 08091 "Logic and Probability for Scene Interpretation" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper.

Cite as

Bernd Neumann, Anthony C. Cohn, David C. Hogg, and Ralf Möller. 08091 Abstracts Collection – Logic and Probability for Scene Interpretation. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{neumann_et_al:DagSemProc.08091.1,
  author =	{Neumann, Bernd and Cohn, Anthony C. and Hogg, David C. and M\"{o}ller, Ralf},
  title =	{{08091 Abstracts Collection -- Logic and Probability for Scene Interpretation}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.1},
  URN =		{urn:nbn:de:0030-drops-16480},
  doi =		{10.4230/DagSemProc.08091.1},
  annote =	{Keywords: Logic, probabilities, scene interpretation}
}
Architectural and Representational Requirements for Seeing Processes, Proto-affordances and Affordances

Authors: Aaron Sloman

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
This paper, combining the standpoints of philosophy and Artificial Intelligence with theoretical psychology, summarises several decades of investigation by the author of the variety of functions of vision in humans and other animals, pointing out that biological evolution has solved many more problems than are normally noticed. For example, the biological functions of human and animal vision are closely related to the ability of humans to do mathematics, including discovering and proving theorems in geometry, topology and arithmetic. Many of the phenomena discovered by psychologists and neuroscientists require sophisticated controlled laboratory settings and specialised measuring equipment, whereas the functions of vision reported here mostly require only careful attention to a wide range of everyday competences that easily go unnoticed. Currently available computer models and neural theories are very far from explaining those functions, so progress in explaining how vision works is more in need of new proposals for explanatory mechanisms than new laboratory data. Systematically formulating the requirements for such mechanisms is not easy. If we start by analysing familiar competences, that can suggest new experiments to clarify precise forms of these competences, how they develop within individuals, which other species have them, and how performance varies according to conditions. This will help to constrain requirements for models purporting to explain how the competences work. For example, Gibson’s theory of affordances needs a number of extensions, including allowing affordances to be composed in several ways from lower level proto-affordances. The paper ends with speculations regarding the need for new kinds of information-processing machinery to account for the phenomena.

Cite as

Aaron Sloman. Architectural and Representational Requirements for Seeing Processes, Proto-affordances and Affordances. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-57, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{sloman:DagSemProc.08091.4,
  author =	{Sloman, Aaron},
  title =	{{Architectural and Representational Requirements for Seeing Processes, Proto-affordances and Affordances}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--57},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.4},
  URN =		{urn:nbn:de:0030-drops-16569},
  doi =		{10.4230/DagSemProc.08091.4},
  annote =	{Keywords: Vision, affordances, architectures, development, design space}
}
Abstraction, ontology and task-guidance for visual perception in robots

Authors: Matthias Schlemmer and Markus Vincze

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
For solving recognition tasks in order to navigate in unknown environments and to manipulate objects, humans seem to use at least the following crucial capabilities: abstraction (for storing higher-level concepts of things), common sense knowledge, and prediction. Whereas the first and second provide the basis for situated recognition, the second and third serve to prune the search space by anticipating what (in an abstract sense) will be seen next, and where. The main goal of our current research is to use such "common sense world knowledge" to guide visual perception and scene understanding. To this end, we combine an OWL ontology with the output of vision tools. The additional use of abstraction techniques aims to enable the detection of higher-level concepts, such as arches composed of a variable number of parts. The ultimate goal is to find concepts such as doors and tables in arbitrary scenes, in order to arrive at a generic recognition tool for home robots. The ontology should additionally provide task-specific information about the things to detect.

Cite as

Matthias Schlemmer and Markus Vincze. Abstraction, ontology and task-guidance for visual perception in robots. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{schlemmer_et_al:DagSemProc.08091.2,
  author =	{Schlemmer, Matthias and Vincze, Markus},
  title =	{{Abstraction, ontology and task-guidance for visual perception in robots}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.2},
  URN =		{urn:nbn:de:0030-drops-16081},
  doi =		{10.4230/DagSemProc.08091.2},
  annote =	{Keywords: Abstraction, ontology, task, vision}
}
Approximate OWL Instance Retrieval with SCREECH

Authors: Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph, and Tuvshintur Tserendorj

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
With the increasing interest in expressive ontologies for the Semantic Web, it is critical to develop scalable and efficient ontology reasoning techniques that can properly cope with very high data volumes. For certain application domains, approximate reasoning solutions, which trade soundness or completeness for increased reasoning speed, will help to deal with the high computational complexities which state-of-the-art ontology reasoning tools have to face. In this paper, we present a comprehensive overview of the SCREECH approach to approximate instance retrieval with OWL ontologies, which is based on the KAON2 algorithms, facilitating a compilation of OWL DL TBoxes into Datalog, which is tractable in terms of data complexity. We present three different instantiations of the SCREECH approach, and report on experiments which show that the gain in efficiency outweighs the number of introduced mistakes in the reasoning process.

Cite as

Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph, and Tuvshintur Tserendorj. Approximate OWL Instance Retrieval with SCREECH. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{hitzler_et_al:DagSemProc.08091.3,
  author =	{Hitzler, Pascal and Kr\"{o}tzsch, Markus and Rudolph, Sebastian and Tserendorj, Tuvshintur},
  title =	{{Approximate OWL Instance Retrieval with SCREECH}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.3},
  URN =		{urn:nbn:de:0030-drops-16157},
  doi =		{10.4230/DagSemProc.08091.3},
  annote =	{Keywords: Description logics, automated reasoning, approximate reasoning, Horn logic}
}
Assimilating knowledge from neuroimages in schizophrenia diagnostics

Authors: Paulo Santos, Carlos Thomaz, Luiz Celiberto, Fabio Duran, Wagner Gattaz, and Geraldo Busatto

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
The aim of this article is to propose an integrated framework for classifying and describing patterns of disorders from medical images using a combination of image registration, linear discriminant analysis and region-based ontologies. In a first stage of this endeavour we are going to study and evaluate multivariate statistical methodologies to identify the most discriminating hyperplane separating two populations contained in the input data. This step has, as its major goal, the analysis of all the data simultaneously rather than feature by feature. The second stage of this work includes the development of an ontology whose aim is the assimilation and exploration of the knowledge contained in the results of the previous statistical methods. Automated knowledge discovery from images is the key motivation for the methods to be investigated in this research. We argue that such investigation provides a suitable framework for characterising the high complexity of MR images in schizophrenia.

Cite as

Paulo Santos, Carlos Thomaz, Luiz Celiberto, Fabio Duran, Wagner Gattaz, and Geraldo Busatto. Assimilating knowledge from neuroimages in schizophrenia diagnostics. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{santos_et_al:DagSemProc.08091.5,
  author =	{Santos, Paulo and Thomaz, Carlos and Celiberto, Luiz and Duran, Fabio and Gattaz, Wagner and Busatto, Geraldo},
  title =	{{Assimilating knowledge from neuroimages in schizophrenia diagnostics}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--25},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.5},
  URN =		{urn:nbn:de:0030-drops-16078},
  doi =		{10.4230/DagSemProc.08091.5},
  annote =	{Keywords: Statistical classification, spatial ontologies}
}
Bayesian Compositional Hierarchies - A Probabilistic Structure for Scene Interpretation

Authors: Bernd Neumann

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
In high-level vision, it is often useful to organize conceptual models in compositional hierarchies. For example, models of building facades (which are used here as examples) can be described in terms of constituent parts such as balconies or window arrays which in turn may be further decomposed. While compositional hierarchies are widely used in scene interpretation, it is not clear how to model and exploit probabilistic dependencies which may exist within and between aggregates. In this contribution I present Bayesian Aggregate Hierarchies as a means to capture probabilistic dependencies in a compositional hierarchy. The formalism integrates well with object-centered representations and extends Bayesian Networks by allowing arbitrary probabilistic dependencies within aggregates. To obtain efficient inference procedures, the aggregate structure must possess abstraction properties which ensure that internal aggregate properties are only affected in accordance with the hierarchical structure. Using examples from the building domain, it is shown that probabilistic aggregate information can thus be integrated into a logic-based scene interpretation system and provide a preference measure for interpretation steps.

Cite as

Bernd Neumann. Bayesian Compositional Hierarchies - A Probabilistic Structure for Scene Interpretation. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{neumann:DagSemProc.08091.6,
  author =	{Neumann, Bernd},
  title =	{{Bayesian Compositional Hierarchies - A Probabilistic Structure for Scene Interpretation}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.6},
  URN =		{urn:nbn:de:0030-drops-16050},
  doi =		{10.4230/DagSemProc.08091.6},
  annote =	{Keywords: Scene interpretation, compositional hierarchy, probabilistic inference}
}
Combining Logic and Probability in Tracking and Scene Interpretation

Authors: Brandon Bennett

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
The paper gives a high-level overview of some ways in which logical representations and reasoning can be used in computer vision applications, such as tracking and scene interpretation. The combination of logical and statistical approaches is also considered.

Cite as

Brandon Bennett. Combining Logic and Probability in Tracking and Scene Interpretation. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{bennett:DagSemProc.08091.7,
  author =	{Bennett, Brandon},
  title =	{{Combining Logic and Probability in Tracking and Scene Interpretation}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--7},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.7},
  URN =		{urn:nbn:de:0030-drops-16120},
  doi =		{10.4230/DagSemProc.08091.7},
  annote =	{Keywords: Vision, Tracking, Logic, Probability, Spatio-Temporal Continuity}
}
Implementing probabilistic description logics: An application to image interpretation

Authors: Ralf Möller and Tobias H. Näth

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
This paper presents an application of an optimized implementation of a probabilistic description logic defined by Giugno and Lukasiewicz [9] to the domain of image interpretation. This approach extends a description logic with so-called probabilistic constraints to allow for automated reasoning over formal ontologies in combination with probabilistic knowledge. We analyze the performance of current algorithms and investigate new optimization techniques.

Cite as

Ralf Möller and Tobias H. Näth. Implementing probabilistic description logics: An application to image interpretation. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{moller_et_al:DagSemProc.08091.8,
  author =	{M\"{o}ller, Ralf and N\"{a}th, Tobias H.},
  title =	{{Implementing probabilistic description logics: An application to image interpretation}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.8},
  URN =		{urn:nbn:de:0030-drops-16186},
  doi =		{10.4230/DagSemProc.08091.8},
  annote =	{Keywords: Probabilistic description logics, image interpretation, probabilistic lexicographic entailment}
}
Learning Grammatical Models for Object Recognition

Authors: Meg Aycinena Lippow, Leslie Pack Kaelbling, and Tomas Lozano-Perez

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
Many object recognition systems are limited by their inability to share common parts or structure among related object classes. This capability is desirable because it allows information about parts and relationships in one object class to be generalized to other classes for which it is relevant. This ability has the potential to allow effective parameter learning from fewer examples and better generalization of the learned models to unseen instances, and it enables more efficient recognition. With this goal in mind, we have designed a representation and recognition framework that captures structural variability and shared part structure within and among object classes. The framework uses probabilistic geometric grammars (PGGs) to represent object classes recursively in terms of their parts, thereby exploiting the hierarchical and substitutive structure inherent to many types of objects. To incorporate geometric and appearance information, we extend traditional probabilistic context-free grammars to represent distributions over the relative geometric characteristics of object parts as well as the appearance of primitive parts. We describe an efficient dynamic programming algorithm for object categorization and localization in images given a PGG model. We also develop an EM algorithm to estimate the parameters of a grammar structure from training data, and a search-based structure learning approach that finds a compact grammar to explain the image data while sharing substructure among classes. Finally, we describe a set of experiments that demonstrate empirically that the system provides a performance benefit.

Cite as

Meg Aycinena Lippow, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Learning Grammatical Models for Object Recognition. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)



@InProceedings{aycinenalippow_et_al:DagSemProc.08091.9,
  author =	{Aycinena Lippow, Meg and Kaelbling, Leslie Pack and Lozano-Perez, Tomas},
  title =	{{Learning Grammatical Models for Object Recognition}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.9},
  URN =		{urn:nbn:de:0030-drops-16113},
  doi =		{10.4230/DagSemProc.08091.9},
  annote =	{Keywords: Object recognition, grammars, structure learning}
}
Document
Probabilistic Scene Modeling for Situated Computer Vision

Authors: Sven Wachsmuth and Agnes Swadzba

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
Verbal statements and vision are a rich source of information in a human-machine interaction scenario. For this reason, Situated Computer Vision aims to include knowledge about the communicative situation in which perception takes place. This paper presents three approaches to building scene models of such scenarios by combining different modalities. Viewing (planar) scenes as configurations of parts leads to probabilistic modeling with Bayes nets that relate spoken utterances to the results of an object recognition step. In the second approach, parallel datasets form the basis for analyzing the statistical dependencies between them by learning a statistical translation model that maps between these datasets (here: words in a text and boundary fragments extracted from 2D images). The third approach deals with complex indoor scenes from which 3D data is acquired. Planar structures in the 3D points, together with statistics extracted over these planar patches, describe the coarse spatial layouts of different indoor room types in such a way that a holistic classification scheme can be provided.
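The first approach relates utterances to detector output through a Bayes net. As a hedged illustration only (not the authors' model), the sketch below fuses a spoken word and a detector label into a posterior over the object class with a naive Bayes factorization; every class name and probability is invented for the example.

```python
# Toy naive Bayes fusion of one spoken word and one detector label.
# P(class | word, det) ∝ P(class) * P(word | class) * P(det | class).

prior = {"cup": 0.5, "plate": 0.5}
p_word = {("cup", "mug"): 0.7, ("cup", "dish"): 0.3,
          ("plate", "mug"): 0.1, ("plate", "dish"): 0.9}
p_det = {("cup", "cup"): 0.8, ("cup", "plate"): 0.2,
         ("plate", "cup"): 0.3, ("plate", "plate"): 0.7}

def posterior(word, det):
    """Normalized posterior over object classes given both observations."""
    joint = {c: prior[c] * p_word[(c, word)] * p_det[(c, det)] for c in prior}
    z = sum(joint.values())
    return {c: v / z for c, v in joint.items()}

post = posterior("mug", "cup")  # both cues favor "cup"
```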

Cite as

Sven Wachsmuth and Agnes Swadzba. Probabilistic Scene Modeling for Situated Computer Vision. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{wachsmuth_et_al:DagSemProc.08091.10,
  author =	{Wachsmuth, Sven and Swadzba, Agnes},
  title =	{{Probabilistic Scene Modeling for Situated Computer Vision}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.10},
  URN =		{urn:nbn:de:0030-drops-16097},
  doi =		{10.4230/DagSemProc.08091.10},
  annote =	{Keywords: Scene Modeling, Human Robot Interaction}
}
Document
Qualitative Abstraction and Inherent Uncertainty in Scene Recognition

Authors: Carsten Elfers, Otthein Herzog, Andrea Miene, and Thomas Wagner

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
The interpretation of scenes, e.g., in videos, is demanding at all levels. At the image processing level it is necessary to apply an "intelligent" segmentation and to determine the objects of interest. For the higher symbolic levels it is a challenging task to perform the transition from quantitative to qualitative data and to determine the relations between objects. Here we assume that the positions of objects ("agents") in images and videos have already been determined, as a minimal requirement for further analysis. The interpretation of complex and dynamic scenes with embedded intentional agents is one of the most challenging tasks in current AI and imposes highly heterogeneous requirements. A key problem is the efficient and robust representation of uncertainty. We propose that uncertainty should be distinguished with respect to two different epistemological sources: (1) noisy sensor information and (2) ignorance, and we present possible solutions for both. The use of sensory information in robotics shows impressive results, especially in localization (e.g., MCL) and map building (e.g., SLAM), but its probabilistic nature also imposes serious problems on the successive higher levels of processing. We propose that (a) qualitative abstraction (the classic approach) from quantitative to (at least partially) qualitative representations and (b) coherence-based perception validation based on Dempster-Shafer theory (DST) can reduce this problem significantly. The second important problem class addressed is ignorance; here we focus on reducing missing information by inference. We compare our experiences in an important field of scene interpretation, namely plan and intention recognition: 
the first approach is logical and abductive, while the second, in contrast, is probabilistic (a Relational Hidden Markov Model, RHMM).
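The coherence-based perception validation above relies on Dempster-Shafer theory. As a hedged illustration only (not the authors' system), the following sketch implements Dempster's rule of combination for two mass functions over a toy frame of discernment; the sensor names and mass values are invented for the example.

```python
# Dempster's rule of combination for two basic mass assignments,
# each a dict mapping frozenset (focal element) -> mass.

def dempster_combine(m1, m2):
    """Combine two mass functions; normalizes away conflicting (empty) mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two sensors over the frame {car, ped}; the full frame encodes ignorance.
CAR, PED, FRAME = frozenset({"car"}), frozenset({"ped"}), frozenset({"car", "ped"})
m1 = {CAR: 0.6, FRAME: 0.4}            # sensor 1: mostly "car", some ignorance
m2 = {CAR: 0.5, PED: 0.3, FRAME: 0.2}  # sensor 2: less committed
m = dempster_combine(m1, m2)
```

Note how assigning mass to the whole frame, rather than to a singleton, is what lets DST represent ignorance separately from noisy but committed evidence.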

Cite as

Carsten Elfers, Otthein Herzog, Andrea Miene, and Thomas Wagner. Qualitative Abstraction and Inherent Uncertainty in Scene Recognition. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{elfers_et_al:DagSemProc.08091.11,
  author =	{Elfers, Carsten and Herzog, Otthein and Miene, Andrea and Wagner, Thomas},
  title =	{{Qualitative Abstraction and Inherent Uncertainty in Scene Recognition}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Cohn, Anthony G. and Hogg, David C. and M\"{o}ller, Ralf and Neumann, Bernd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.11},
  URN =		{urn:nbn:de:0030-drops-16141},
  doi =		{10.4230/DagSemProc.08091.11},
  annote =	{Keywords: Scene interpretation, intentional agents, uncertainty, qualitative abstraction, coherence-based perception, abduction, RHMM}
}
