Dagstuhl Seminar Proceedings, Volume 10081



Publication Details

  • Published at: 2010-10-27
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Documents

Document
10081 Abstracts Collection – Cognitive Robotics

Authors: Gerhard Lakemeyer, Hector J. Levesque, and Fiora Pirri


Abstract
From 21.02. to 26.02.2010, the Dagstuhl Seminar 10081 "Cognitive Robotics" was held at Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, where available.

Cite as

Gerhard Lakemeyer, Hector J. Levesque, and Fiora Pirri. 10081 Abstracts Collection – Cognitive Robotics. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{lakemeyer_et_al:DagSemProc.10081.1,
  author =	{Lakemeyer, Gerhard and Levesque, Hector J. and Pirri, Fiora},
  title =	{{10081 Abstracts Collection – Cognitive Robotics}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--19},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.1},
  URN =		{urn:nbn:de:0030-drops-27776},
  doi =		{10.4230/DagSemProc.10081.1},
  annote =	{Keywords: Cognitive robotics, Knowledge representation and reasoning, Machine learning, Cognitive science, Cognitive vision}
}
Document
A Constraint-Based Approach for Plan Management in Intelligent Environments

Authors: Federico Pecora and Marcello Cirillo


Abstract
In this paper we address the problem of realizing a service-providing reasoning infrastructure for proactive human assistance in intelligent environments. We propose SAM, an architecture which leverages temporal knowledge represented as relations in Allen’s interval algebra and constraint-based temporal planning techniques. SAM seamlessly combines two key capabilities for contextualized service provision, namely human activity recognition and planning for controlling pervasive actuation devices.
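
As a rough illustration of the temporal representation the abstract mentions (and not the SAM implementation itself), the sketch below checks which basic Allen interval relation holds between two timed activities; the interval names and the rule "Cooking DURING StoveOn" are invented for this example.

```python
# Illustrative sketch only: a tiny check of Allen's interval relations between
# timed activities, in the spirit of SAM's temporal representation (names and
# structure here are hypothetical, not taken from the SAM system).
from collections import namedtuple

Interval = namedtuple("Interval", ["name", "start", "end"])

def allen_relation(a, b):
    """Return the basic Allen relation holding between intervals a and b."""
    if a.end < b.start:
        return "before"
    if a.end == b.start:
        return "meets"
    if a.start == b.start and a.end == b.end:
        return "equals"
    if a.start > b.start and a.end < b.end:
        return "during"
    if a.start < b.start and b.start < a.end < b.end:
        return "overlaps"
    return "other"  # remaining relations (starts, finishes, inverses) omitted

cooking = Interval("Cooking", start=10, end=30)
stove_on = Interval("StoveOn", start=8, end=35)

# An activity-recognition rule such as "Cooking DURING StoveOn" can then be
# tested against sensed intervals:
print(allen_relation(cooking, stove_on))  # -> "during"
```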

Cite as

Federico Pecora and Marcello Cirillo. A Constraint-Based Approach for Plan Management in Intelligent Environments. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{pecora_et_al:DagSemProc.10081.2,
  author =	{Pecora, Federico and Cirillo, Marcello},
  title =	{{A Constraint-Based Approach for Plan Management in Intelligent Environments}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.2},
  URN =		{urn:nbn:de:0030-drops-26358},
  doi =		{10.4230/DagSemProc.10081.2},
  annote =	{Keywords: }
}
Document
Attending to Motion: an object-based approach

Authors: Anna Belardinelli


Abstract
Visual attention is the biological mechanism that turns mere sensing into conscious perception. In this process, object-based modulation of attention provides a further layer between low-level space/feature-based region selection and full object recognition. In this context, motion is a very powerful feature, naturally attracting our gaze and yielding rapid and effective shape distinction. Moving from a pixel-based account of attention to the definition of proto-objects as perceptual units labelled with a single saliency value, we present a framework for the selection of moving objects within cluttered scenes. Through segmentation of motion energy features, the system extracts coherently moving proto-objects, defined as consistently moving blobs, and produces an object saliency map by evaluating the bottom-up distinctiveness of each object candidate with respect to its surroundings, in a center-surround fashion.
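
A minimal sketch of the center-surround idea described above, not the authors' implementation: each labelled proto-object is scored by how much its mean motion energy stands out from the rest of the scene. The inputs `motion_energy` and `labels` are assumed to come from earlier motion segmentation.

```python
# Minimal sketch (not the paper's code): per-blob saliency as a center-surround
# contrast of motion energy. "labels" assigns each pixel to a proto-object
# (0 = background); both inputs are hypothetical here.
import numpy as np

def proto_object_saliency(motion_energy, labels):
    """Score each proto-object by how much its mean motion energy
    exceeds that of its surround (everything outside the blob)."""
    saliency = {}
    for obj_id in np.unique(labels):
        if obj_id == 0:
            continue
        center = motion_energy[labels == obj_id].mean()
        surround = motion_energy[labels != obj_id].mean()
        saliency[obj_id] = max(center - surround, 0.0)
    return saliency

# Toy example: one fast-moving blob (label 1) in a mostly static scene.
energy = np.zeros((8, 8)); energy[2:4, 2:4] = 1.0
labels = np.zeros((8, 8), dtype=int); labels[2:4, 2:4] = 1
print(proto_object_saliency(energy, labels))  # blob 1 gets a high score
```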

Cite as

Anna Belardinelli. Attending to Motion: an object-based approach. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{belardinelli:DagSemProc.10081.3,
  author =	{Belardinelli, Anna},
  title =	{{Attending to Motion: an object-based approach}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--11},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.3},
  URN =		{urn:nbn:de:0030-drops-26285},
  doi =		{10.4230/DagSemProc.10081.3},
  annote =	{Keywords: Visual attention model, motion selection, saliency map}
}
Document
Attentive Monitoring and Adaptive Control in Cognitive Robotics

Authors: E. Burattini, Alberto Finzi, S. Rossi, and Maria Carla Staffa


Abstract
In this work, we present an attentional system for a robotic agent capable of adapting its emergent behavior to the surrounding environment and to its internal state. In this framework, the agent is endowed with simple attentional mechanisms regulating the frequencies of sensory readings and behavior activations. The process of changing the frequency of sensory readings is interpreted as an increase or decrease of attention towards relevant behaviors and particular aspects of the external environment. In this paper, we present our framework, discussing several case studies with incrementally complex behaviors and tasks.
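
To make the frequency-regulation idea concrete, here is a hedged sketch of an adaptive "clock" (not the authors' controller): a behavior's sensing period shortens when a relevant stimulus is strong and lengthens when it is weak. All gains and bounds below are invented.

```python
# Hedged sketch of the general idea: adapt a behavior's sensing period to the
# current stimulus strength (in [0, 1]); strong stimulus -> more attention ->
# shorter period. Parameter values are illustrative only.
def adapt_period(period, stimulus, base=1.0, p_min=0.1, p_max=4.0, gain=0.5):
    """Return the next sensing period given the current stimulus in [0, 1]."""
    target = base * (1.0 - gain * stimulus)      # strong stimulus -> short period
    new_period = 0.5 * period + 0.5 * target     # smooth the adaptation
    return min(max(new_period, p_min), p_max)

period = 1.0
for stimulus in [0.0, 0.2, 0.9, 0.9, 0.1]:
    period = adapt_period(period, stimulus)
    print(f"stimulus={stimulus:.1f} -> sensing period {period:.2f}s")
```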

Cite as

E. Burattini, Alberto Finzi, S. Rossi, and Maria Carla Staffa. Attentive Monitoring and Adaptive Control in Cognitive Robotics. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{burattini_et_al:DagSemProc.10081.4,
  author =	{Burattini, E. and Finzi, Alberto and Rossi, S. and Staffa, Maria Carla},
  title =	{{Attentive Monitoring and Adaptive Control in Cognitive Robotics}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.4},
  URN =		{urn:nbn:de:0030-drops-26322},
  doi =		{10.4230/DagSemProc.10081.4},
  annote =	{Keywords: Attention, behavior-based control, robotics}
}
Document
Cognitive Robotics

Authors: Hector J. Levesque and Gerhard Lakemeyer


Abstract
This chapter is dedicated to the memory of Ray Reiter. It is also an overview of cognitive robotics, as we understand it to have been envisaged by him. Of course, nobody can control the use of a term or the direction of research. We apologize in advance to those who feel that other approaches to cognitive robotics and related problems are inadequately represented here.

Cite as

Hector J. Levesque and Gerhard Lakemeyer. Cognitive Robotics. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{levesque_et_al:DagSemProc.10081.5,
  author =	{Levesque, Hector J. and Lakemeyer, Gerhard},
  title =	{{Cognitive Robotics}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--19},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.5},
  URN =		{urn:nbn:de:0030-drops-26335},
  doi =		{10.4230/DagSemProc.10081.5},
  annote =	{Keywords: }
}
Document
Combining Planning and Motion Planning

Authors: Jaesik Choi and Eyal Amir


Abstract
Robotic manipulation is important for real, physical-world applications. General-purpose manipulation with a robot (e.g., delivering dishes or opening doors with a key) is demanding. It is hard because (1) objects are constrained in position and orientation, (2) many non-spatial constraints interact (or interfere) with each other, and (3) robots may have many degrees of freedom (DOF). In this paper we solve the problem of general-purpose robotic manipulation using a novel combination of planning and motion planning. Our approach integrates motions of a robot with other (non-physical or external-to-robot) actions to achieve a goal while manipulating objects. It differs from previous, hierarchical approaches in that (a) it considers kinematic constraints in configuration space (C-space) together with constraints over object manipulations; (b) it automatically generates high-level (logical) actions from a C-space based motion planning algorithm; and (c) it decomposes a planning problem into small segments, thus reducing the complexity of planning.
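
A very small sketch of point (b), not the paper's algorithm: connectivity reported by a motion planner between C-space regions is turned into symbolic move actions, over which a plain symbolic search then plans. Region names and the connectivity table are invented.

```python
# Illustrative sketch only: high-level "move" actions are generated from which
# configuration-space regions a motion planner reports as mutually reachable,
# and a symbolic search plans over those generated actions.
from collections import deque

# Assume a motion planner has established which regions are connected in C-space.
cspace_connectivity = {
    "shelf": {"table"},
    "table": {"shelf", "sink"},
    "sink": {"table"},
}

# Each reported connection becomes a symbolic action move(a, b).
actions = [("move", a, b) for a, nbrs in cspace_connectivity.items() for b in nbrs]

def plan(start, goal):
    """Breadth-first search over the generated high-level actions."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        region, path = frontier.popleft()
        if region == goal:
            return path
        for name, a, b in actions:
            if a == region and b not in seen:
                seen.add(b)
                frontier.append((b, path + [(name, a, b)]))
    return None

print(plan("shelf", "sink"))  # [('move', 'shelf', 'table'), ('move', 'table', 'sink')]
```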

Cite as

Jaesik Choi and Eyal Amir. Combining Planning and Motion Planning. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{choi_et_al:DagSemProc.10081.6,
  author =	{Choi, Jaesik and Amir, Eyal},
  title =	{{Combining Planning and Motion Planning}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.6},
  URN =		{urn:nbn:de:0030-drops-26294},
  doi =		{10.4230/DagSemProc.10081.6},
  annote =	{Keywords: Motion Planning, Factored Planning, Robotic arm}
}
Document
Coming up With Good Excuses: What to do When no Plan Can be Found

Authors: Moritz Göbeldecker, Thomas Keller, Patrick Eyerich, Michael Brenner, and Bernhard Nebel


Abstract
When using a planner-based agent architecture, many things can go wrong. First and foremost, an agent might fail to execute one of the planned actions for some reason. Even more annoying, however, is a situation where the agent is incompetent, i.e., unable to come up with a plan. This might be due to the fact that there are principal reasons that prohibit a successful plan, or simply because the task's description is incomplete or incorrect. In either case, an explanation for such a failure would be very helpful. We will address this problem and provide a formalization of coming up with excuses for not being able to find a plan. Based on that, we will present an algorithm that is able to find excuses and demonstrate that such excuses can be found in practical settings in reasonable time.
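
As a toy illustration of the excuse idea (not the authors' algorithm), the sketch below brute-forces the smallest set of facts that, if added to the initial state, would make a tiny STRIPS-like task solvable; the domain, facts, and search depth are all invented.

```python
# Minimal sketch of the idea: an "excuse" is a small change to the initial
# state that would make the task solvable. Here we brute-force single-fact
# additions against a toy task with set-based states.
from itertools import combinations

actions = {
    # name: (preconditions, add effects)
    "open_door": ({"have_key"}, {"door_open"}),
    "enter_room": ({"door_open"}, {"in_room"}),
}

def solvable(state, goal, depth=5):
    """Tiny forward search: can the goal be reached within `depth` steps?"""
    if goal <= state:
        return True
    if depth == 0:
        return False
    for pre, add in actions.values():
        if pre <= state and not add <= state:
            if solvable(state | add, goal, depth - 1):
                return True
    return False

def find_excuse(init, goal, candidate_facts):
    """Return a smallest set of added facts that makes the task solvable."""
    for k in range(1, len(candidate_facts) + 1):
        for extra in combinations(candidate_facts, k):
            if solvable(init | set(extra), goal):
                return set(extra)
    return None

init, goal = {"at_door"}, {"in_room"}
print(find_excuse(init, goal, ["have_key", "window_open"]))  # {'have_key'}
```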

Cite as

Moritz Göbeldecker, Thomas Keller, Patrick Eyerich, Michael Brenner, and Bernhard Nebel. Coming up With Good Excuses: What to do When no Plan Can be Found. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{gobeldecker_et_al:DagSemProc.10081.7,
  author =	{G\"{o}beldecker, Moritz and Keller, Thomas and Eyerich, Patrick and Brenner, Michael and Nebel, Bernhard},
  title =	{{Coming up With Good Excuses: What to do When no Plan Can be Found}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.7},
  URN =		{urn:nbn:de:0030-drops-27739},
  doi =		{10.4230/DagSemProc.10081.7},
  annote =	{Keywords: Planning, knowledge representation}
}
Document
Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems

Authors: Andreas G. Hofmann and Brian C. Williams


Abstract
Robotic devices, such as rovers and autonomous spacecraft, have been successfully controlled by plan execution systems that use plans with temporal flexibility to dynamically adapt to temporal disturbances. To date these execution systems apply to discrete systems that abstract away the detailed dynamic constraints of the controlled device. To control dynamic, under-actuated devices, such as agile bipedal walking machines, we extend this execution paradigm to incorporate detailed dynamic constraints. Building upon prior work on dispatchable plan execution, we introduce a novel approach to flexible plan execution of hybrid under-actuated systems that achieves robustness by exploiting spatial as well as temporal plan flexibility. To accomplish this, we first transform the high-dimensional system into a set of low dimensional, weakly coupled systems. Second, to coordinate these systems such that they achieve the plan in real-time, we compile a plan into a concurrent timed flow tube description. This description represents all feasible control trajectories and their temporal coordination constraints, such that each trajectory satisfies all plan and dynamic constraints. Finally, the problem of runtime plan dispatching is reduced to maintaining state trajectories in their associated flow tubes, while satisfying the coordination constraints. This is accomplished through an efficient local search algorithm that adjusts a small number of control parameters in real-time. The first step has been published previously; this paper focuses on the last two steps. The approach is validated on the execution of a set of bipedal walking plans, using a high fidelity simulation of a biped.
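
A heavily simplified sketch of the dispatching step, not the authors' controller: a flow tube is approximated as per-step bounds on a scalar state, and a single control parameter is chosen by a small local search so the simulated trajectory stays inside the tube. The dynamics, bounds, and candidate set are made up.

```python
# Hedged sketch: keep a trajectory inside a "flow tube" (here just per-step
# [lower, upper] bounds) by searching over a small set of control adjustments.
def simulate(x0, push, steps):
    """Toy dynamics: the state drifts downward unless pushed."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + push - 0.1
        xs.append(x)
    return xs

def in_tube(traj, lower, upper):
    return all(lo <= x <= hi for x, lo, hi in zip(traj, lower, upper))

def dispatch(x0, lower, upper, candidates=(0.0, 0.05, 0.1, 0.15, 0.2)):
    """Pick the smallest control adjustment whose trajectory stays in the tube."""
    for push in candidates:
        traj = simulate(x0, push, steps=len(lower) - 1)
        if in_tube(traj, lower, upper):
            return push, traj
    return None, None

lower, upper = [0.3] * 6, [1.0] * 6
push, traj = dispatch(x0=0.5, lower=lower, upper=upper)
print(push, [round(x, 2) for x in traj])
```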

Cite as

Andreas G. Hofmann and Brian C. Williams. Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{hofmann_et_al:DagSemProc.10081.8,
  author =	{Hofmann, Andreas G. and Williams, Brian C.},
  title =	{{Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.8},
  URN =		{urn:nbn:de:0030-drops-27740},
  doi =		{10.4230/DagSemProc.10081.8},
  annote =	{Keywords: }
}
Document
golog.lua: Towards a Non-Prolog Implementation of Golog for Embedded Systems

Authors: Alexander Ferrein


Abstract
Among many approaches to address the high-level decision making problem for autonomous robots and agents, the robot programming and plan language Golog follows a logic-based deliberative approach, and its successors were successfully deployed in a number of robotics applications over the past ten years. Usually, Golog interpreters are implemented in Prolog, which is not available for our target platform, the biped robot platform Nao. In this paper we sketch our first approach towards a prototype implementation of a Golog interpreter in the scripting language Lua. With the example of the elevator domain we discuss how the basic action theory is specified and how we implemented fluent regression in Lua. One possible advantage of the availability of a non-Prolog implementation of Golog could be that Golog becomes available on a larger number of platforms, and also becomes more attractive for roboticists outside the Cognitive Robotics community.
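
To illustrate what regressing a fluent through an action means in the elevator domain, here is a small sketch written in Python for consistency with the other examples on this page (it is not the paper's Lua code): the regressed condition of on(n) through an action is "on(n) held before, and the action was not turnoff(n)", i.e. the right-hand side of the fluent's successor state axiom.

```python
# Sketch only (Python, not the paper's Lua implementation): regression of the
# fluent on(n) through a single action in the classic elevator domain.
def regress_on(n, action, state):
    """Does on(n) hold after doing `action` in `state`? (successor state axiom)"""
    name, *args = action
    turned_off = (name == "turnoff" and args == [n])
    return state["on"].get(n, False) and not turned_off

state = {"on": {3: True, 5: True}, "floor": 4}
print(regress_on(3, ("turnoff", 3), state))  # False: call button 3 was just served
print(regress_on(5, ("turnoff", 3), state))  # True: call button 5 stays on
```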

Cite as

Alexander Ferrein. golog.lua: Towards a Non-Prolog Implementation of Golog for Embedded Systems. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{ferrein:DagSemProc.10081.9,
  author =	{Ferrein, Alexander},
  title =	{{golog.lua: Towards a Non-Prolog Implementation of Golog for Embedded Systems}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.9},
  URN =		{urn:nbn:de:0030-drops-26317},
  doi =		{10.4230/DagSemProc.10081.9},
  annote =	{Keywords: Action and change, high-level control, robotics}
}
Document
Improving the Performance of Complex Agent Plans Through Reinforcement Learning

Authors: Matteo Leonetti and Luca Iocchi


Abstract
Agent programming in complex, partially observable, and stochastic domains usually requires a great deal of understanding of both the domain and the task in order to provide the agent with the knowledge necessary to act effectively. While symbolic methods allow the designer to specify declarative knowledge about the domain, the resulting plan can be brittle, since it is difficult to supply a symbolic model that is accurate enough to foresee all possible events in complex environments, especially in the case of partial observability. Reinforcement Learning (RL) techniques, on the other hand, can learn a policy and make use of a learned model, but it is difficult to reduce and shape the scope of the learning algorithm by exploiting a priori information. We propose a methodology for writing complex agent programs that can be effectively improved through experience. We show how to derive a stochastic process from a partial specification of the plan, so that the latter's performance can be improved by solving an RL problem much smaller than classical RL formulations. Finally, we demonstrate our approach in the context of Keepaway Soccer, a common RL benchmark based on the RoboCup Soccer 2D simulator.
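
The general recipe, sketched very loosely below and not taken from the authors' system: learning is applied only at the choice points a partial plan leaves open, so the value table is tiny compared with a full policy. The single choice point, payoffs, and learning parameters are invented.

```python
# Hedged sketch: epsilon-greedy value updates restricted to one choice point
# that a partial plan leaves open (a one-state bandit, for illustration only).
import random

choice_points = {"pass_or_hold": ["pass", "hold"]}
Q = {(cp, a): 0.0 for cp, acts in choice_points.items() for a in acts}

def reward(action):
    # Invented payoff: passing is usually better, with some noise.
    return random.gauss(1.0 if action == "pass" else 0.3, 0.2)

alpha, epsilon = 0.1, 0.2
for _ in range(500):
    cp = "pass_or_hold"
    acts = choice_points[cp]
    if random.random() < epsilon:                    # explore
        a = random.choice(acts)
    else:                                            # exploit
        a = max(acts, key=lambda x: Q[(cp, x)])
    Q[(cp, a)] += alpha * (reward(a) - Q[(cp, a)])   # incremental value update

print(max(choice_points["pass_or_hold"], key=lambda a: Q[("pass_or_hold", a)]))
```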

Cite as

Matteo Leonetti and Luca Iocchi. Improving the Performance of Complex Agent Plans Through Reinforcement Learning. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{leonetti_et_al:DagSemProc.10081.10,
  author =	{Leonetti, Matteo and Iocchi, Luca},
  title =	{{Improving the Performance of Complex Agent Plans Through Reinforcement Learning}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.10},
  URN =		{urn:nbn:de:0030-drops-26347},
  doi =		{10.4230/DagSemProc.10081.10},
  annote =	{Keywords: Agent programming, planning, reinforcement learning, semi non-Markov decision process}
}
Document
Modeling the Observed Behavior of a Robot through Machine Learning

Authors: Malik Ghallab


Abstract
Artificial systems are becoming more and more complex, almost as complex in some cases as natural systems. Up to now, the typical engineering question was "how do I design my system to behave according to some specifications?". However, the incremental design process is leading to artifacts so complex that engineers are more and more addressing a quite different issue: "how do I model the observed behavior of my system?". Engineers are faced with the same problem as scientists studying natural phenomena. It may sound strange for an engineer to engage in observing and modeling what a system is doing, since this should be inferable from the models used in the system's design stage. However, a modular design of a complex artifact develops only local models that are combined on the basis of some composition principle; it seldom provides global behavior models.

Cite as

Malik Ghallab. Modeling the Observed Behavior of a Robot through Machine Learning. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{ghallab:DagSemProc.10081.11,
  author =	{Ghallab, Malik},
  title =	{{Modeling the Observed Behavior of a Robot through Machine Learning}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.11},
  URN =		{urn:nbn:de:0030-drops-26379},
  doi =		{10.4230/DagSemProc.10081.11},
  annote =	{Keywords: Robotics, Machine Learning}
}
Document
On First-Order Definability and Computability of Progression for Local-Effect Actions and Beyond

Authors: Yongmei Liu and Gerhard Lakemeyer


Abstract
In a seminal paper, Lin and Reiter introduced the notion of progression for basic action theories in the situation calculus. Unfortunately, progression is not first-order definable in general. Recently, Vassos, Lakemeyer, and Levesque showed that in case actions have only local effects, progression is first-order representable. However, they could show computability of the first-order representation only for a restricted class. Also, their proofs were quite involved. In this paper, we present a result stronger than theirs: for local-effect actions, progression is always first-order definable and computable. We give a very simple proof for this via the concept of forgetting. We also show first-order definability and computability results for a class of knowledge bases and actions with non-local effects. Moreover, for a certain class of local-effect actions and knowledge bases for representing disjunctive information, we show that progression is not only first-order definable but also efficiently computable.
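
For intuition only (this is not the paper's formal construction): with local-effect actions, an action can only change fluent atoms whose arguments appear among its own arguments, so progressing a database of ground atoms amounts to rewriting just those few atoms. The tiny move action and atoms below are invented.

```python
# Illustrative sketch: progression of a set of ground fluent atoms under a
# local-effect action rewrites only the atoms named by the action's arguments.
def progress(db, action, effects):
    """db: set of ground atoms; effects(action) -> (atoms to delete, atoms to add)."""
    dels, adds = effects(action)
    return (db - dels) | adds

def move_effects(action):
    _, obj, src, dst = action            # e.g. ("move", "box1", "room1", "room2")
    return {("at", obj, src)}, {("at", obj, dst)}

db = {("at", "box1", "room1"), ("at", "box2", "room3")}
db = progress(db, ("move", "box1", "room1", "room2"), move_effects)
print(db)  # box1 is now at room2; the atom about box2 is untouched
```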

Cite as

Yongmei Liu and Gerhard Lakemeyer. On First-Order Definability and Computability of Progression for Local-Effect Actions and Beyond. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{liu_et_al:DagSemProc.10081.12,
  author =	{Liu, Yongmei and Lakemeyer, Gerhard},
  title =	{{On First-Order Definability and Computability of Progression for Local-Effect Actions and Beyond}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--7},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.12},
  URN =		{urn:nbn:de:0030-drops-26380},
  doi =		{10.4230/DagSemProc.10081.12},
  annote =	{Keywords: Action and change, knowledge representation}
}
Document
Research with Collaborative Unmanned Aircraft Systems

Authors: Patrick Doherty, Jonas Kvarnström, Fredrik Heintz, D. Landen, and P.-M. Olsson


Abstract
We provide an overview of ongoing research which targets the development of a principled framework for mixed-initiative interaction with unmanned aircraft systems (UAS). UASs are now becoming technologically mature enough to be integrated into civil society. Principled interaction between UASs and human resources is an essential component of their future use in complex emergency services or blue-light scenarios. In our current research, we have targeted a triad of fundamental, interdependent conceptual issues: delegation, mixed-initiative interaction and adjustable autonomy, which is being used as a basis for developing a principled and well-defined framework for interaction. This can be used to clarify, validate and verify different types of interaction between human operators and UAS systems, both theoretically and practically, in UAS experimentation with our deployed platforms.

Cite as

Patrick Doherty, Jonas Kvarnström, Fredrik Heintz, D. Landen, and P.-M. Olsson. Research with Collaborative Unmanned Aircraft Systems. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{doherty_et_al:DagSemProc.10081.13,
  author =	{Doherty, Patrick and Kvarnstr\"{o}m, Jonas and Heintz, Fredrik and Landen, D. and Olsson, P.-M.},
  title =	{{Research with Collaborative Unmanned Aircraft Systems}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.13},
  URN =		{urn:nbn:de:0030-drops-26300},
  doi =		{10.4230/DagSemProc.10081.13},
  annote =	{Keywords: Multi-agent systems, robotics, human-robot interaction, delegation}
}
Document
Robot Learning Constrained by Planning and Reasoning

Authors: Claude Sammut, Raymond Sheh, and Tak Fai Yi


Abstract
Robot learning is usually done by trial-and-error or learning by example. Neither of these methods takes advantage of prior knowledge or of any ability to reason about actions. We describe two learning systems. In the first, we learn a model of a robot's actions. This is used in simulation to search for a sequence of actions that achieves the goal of traversing rough terrain. Further learning is used to compress the results of this search into a set of situation-action rules. In the second system, we assume the robot has some knowledge of the effects of actions and can use these to plan a sequence of actions. The qualitative states that result from the plan are used as constraints for trial-and-error learning. This approach greatly reduces the number of trials required by the learner. The method is demonstrated on the problem of a bipedal robot learning to walk.
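
A loose sketch of the rule-compression step as we read it (not the authors' learner): (situation, best action) pairs produced by search are compressed into situation-action rules by majority vote over a coarse discretization. The terrain feature, threshold, and example pairs are invented.

```python
# Minimal sketch: compress search results into situation-action rules.
from collections import Counter, defaultdict

def discretize(pitch):
    return "steep" if pitch > 0.3 else "flat"    # invented threshold

# Pairs a lookahead search might have produced on rough terrain (made up).
search_results = [(0.5, "slow"), (0.6, "slow"), (0.1, "fast"),
                  (0.2, "fast"), (0.4, "slow")]

votes = defaultdict(Counter)
for pitch, action in search_results:
    votes[discretize(pitch)][action] += 1

rules = {situation: counts.most_common(1)[0][0] for situation, counts in votes.items()}
print(rules)  # {'steep': 'slow', 'flat': 'fast'}
```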

Cite as

Claude Sammut, Raymond Sheh, and Tak Fai Yi. Robot Learning Constrained by Planning and Reasoning. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{sammut_et_al:DagSemProc.10081.14,
  author =	{Sammut, Claude and Sheh, Raymond and Yi, Tak Fai},
  title =	{{Robot Learning Constrained by Planning and Reasoning}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--5},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.14},
  URN =		{urn:nbn:de:0030-drops-28163},
  doi =		{10.4230/DagSemProc.10081.14},
  annote =	{Keywords: }
}
Document
Self-Maintenance for Autonomous Robots in the Situation Calculus

Authors: Stefan Schiffer, Andreas Wortmann, and Gerhard Lakemeyer


Abstract
In order to make a robot execute a given task plan more robustly, we want to enable it to take care of its self-maintenance requirements during online execution of this program. This requires the robot to know about the (internal) states of its components, constraints that restrict execution of certain actions, and possibly also how to recover from faulty situations. The general idea is to implement a transformation process on the plans, which are specified in the agent programming language ReadyLog, to be performed based on explicit (temporal) constraints. Afterwards, a 'guarded' execution of the transformed program should result in more robust behavior.
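
A rough sketch of what guarded execution could look like, not the ReadyLog transformation itself: before each plan step, maintenance constraints are checked and a recovery action is inserted when one is violated. Constraint names, recovery actions, and the battery model are invented.

```python
# Hedged sketch of "guarded" execution with a single maintenance constraint.
def guarded_execute(plan, state, constraints, recoveries, execute):
    for action in plan:
        for name, holds in constraints.items():
            if not holds(state):                       # maintenance check
                print(f"constraint '{name}' violated -> {recoveries[name]}")
                execute(recoveries[name], state)       # insert recovery action
        execute(action, state)

def execute(action, state):
    if action == "charge":
        state["battery"] = 1.0
    else:
        state["battery"] -= 0.3
    print(f"did {action}, battery={state['battery']:.1f}")

state = {"battery": 0.5}
constraints = {"enough_battery": lambda s: s["battery"] > 0.2}
recoveries = {"enough_battery": "charge"}
guarded_execute(["goto_kitchen", "grasp_cup", "goto_table"],
                state, constraints, recoveries, execute)
```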

Cite as

Stefan Schiffer, Andreas Wortmann, and Gerhard Lakemeyer. Self-Maintenance for Autonomous Robots in the Situation Calculus. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{schiffer_et_al:DagSemProc.10081.15,
  author =	{Schiffer, Stefan and Wortmann, Andreas and Lakemeyer, Gerhard},
  title =	{{Self-Maintenance for Autonomous Robots in the Situation Calculus}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.15},
  URN =		{urn:nbn:de:0030-drops-26363},
  doi =		{10.4230/DagSemProc.10081.15},
  annote =	{Keywords: Domestic mobile robotics, self-maintenance, robustness}
}
Document
Stream-Based Reasoning in DyKnow

Authors: Fredrik Heintz, Jonas Kvarnström, and Patrick Doherty


Abstract
The information available to modern autonomous systems is often in the form of streams. As the number of sensors and other stream sources increases there is a growing need for incremental reasoning about the incomplete content of sets of streams in order to draw relevant conclusions and react to new situations as quickly as possible. To act rationally, autonomous agents often depend on high level reasoning components that require crisp, symbolic knowledge about the environment. Extensive processing at many levels of abstraction is required to generate such knowledge from noisy, incomplete and quantitative sensor data. We define knowledge processing middleware as a systematic approach to integrating and organizing such processing, and argue that connecting processing components with streams provides essential support for steady and timely flows of information. DyKnow is a concrete and implemented instantiation of such middleware, providing support for stream reasoning at several levels. First, the formal KPL language allows the specification of streams connecting knowledge processes and the required properties of such streams. Second, chronicle recognition incrementally detects complex events from streams of more primitive events. Third, complex metric temporal formulas can be incrementally evaluated over streams of states. DyKnow and the stream reasoning techniques are described and motivated in the context of a UAV traffic monitoring application.
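
As a toy illustration of the third capability (not DyKnow's evaluator), the sketch below incrementally checks a simple windowed temporal property, "in every one of the last three samples the altitude exceeded 50 m", over a stream of states; the field names and thresholds are invented.

```python
# Illustrative sketch: incremental evaluation of a simple metric-temporal
# property over a stream of states, using a sliding window of truth values.
from collections import deque

def monitor(stream, window=3, threshold=50.0):
    recent = deque(maxlen=window)
    for t, state in enumerate(stream):
        recent.append(state["altitude"] > threshold)
        if len(recent) == window:
            yield (t, "satisfied" if all(recent) else "violated")

stream = [{"altitude": a} for a in (80, 75, 60, 40, 55, 70, 90)]
for t, verdict in monitor(stream):
    print(t, verdict)
```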

Cite as

Fredrik Heintz, Jonas Kvarnström, and Patrick Doherty. Stream-Based Reasoning in DyKnow. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{heintz_et_al:DagSemProc.10081.16,
  author =	{Heintz, Fredrik and Kvarnstr\"{o}m, Jonas and Doherty, Patrick},
  title =	{{Stream-Based Reasoning in DyKnow}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.16},
  URN =		{urn:nbn:de:0030-drops-26274},
  doi =		{10.4230/DagSemProc.10081.16},
  annote =	{Keywords: Knowledge representation, autonomous systems, stream-based reasoning}
}
Document
The GLAIR Cognitive Architecture

Authors: Stuart C. Shapiro and Jonathan P. Bona


Abstract
GLAIR (Grounded Layered Architecture with Integrated Reasoning) is a multi-layered cognitive architecture for embodied agents operating in real, virtual, or simulated environments containing other agents. The highest layer of the GLAIR Architecture, the Knowledge Layer (KL), contains the beliefs of the agent, and is the layer in which conscious reasoning, planning, and act selection are performed. The lowest layer of the GLAIR Architecture, the Sensori-Actuator Layer (SAL), contains the controllers of the sensors and effectors of the hardware or software robot. Between the KL and the SAL is the Perceptuo-Motor Layer (PML), which grounds the KL symbols in perceptual structures and subconscious actions, contains various registers for providing the agent’s sense of situatedness in the environment, and handles translation and communication between the KL and the SAL. The motivation for the development of GLAIR has been “Computational Philosophy”, the computational understanding and implementation of human-level intelligent behavior without necessarily being bound by the actual implementation of the human mind. Nevertheless, the approach has been inspired by human psychology and biology.
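
A rough sketch of the three-layer flow described above, illustrating only the architecture's shape and not the actual GLAIR implementation: the KL selects acts over symbols, the PML grounds those symbols in motor commands, and the SAL drives the (here simulated) hardware. The symbol-to-command mapping is invented.

```python
# Hedged sketch of a KL / PML / SAL layering in the spirit of GLAIR.
class SAL:
    """Sensori-Actuator Layer: talks to the (simulated) hardware."""
    def drive(self, command):
        print(f"[SAL] actuating: {command}")

class PML:
    """Perceptuo-Motor Layer: grounds KL symbols in concrete motor commands."""
    def __init__(self, sal):
        self.sal = sal
        self.grounding = {"go_to_door": ("move", 1.2, 0.0), "stop": ("move", 0.0, 0.0)}
    def perform(self, symbol):
        self.sal.drive(self.grounding[symbol])

class KL:
    """Knowledge Layer: holds beliefs and selects acts at the symbolic level."""
    def __init__(self, pml):
        self.pml = pml
        self.beliefs = {"door_open": True}
    def act(self):
        self.pml.perform("go_to_door" if self.beliefs["door_open"] else "stop")

KL(PML(SAL())).act()   # [SAL] actuating: ('move', 1.2, 0.0)
```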

Cite as

Stuart C. Shapiro and Jonathan P. Bona. The GLAIR Cognitive Architecture. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{shapiro_et_al:DagSemProc.10081.17,
  author =	{Shapiro, Stuart C. and Bona, Jonathan P.},
  title =	{{The GLAIR Cognitive Architecture}},
  booktitle =	{Cognitive Robotics},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10081},
  editor =	{Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.17},
  URN =		{urn:nbn:de:0030-drops-27724},
  doi =		{10.4230/DagSemProc.10081.17},
  annote =	{Keywords: Cognitive Robotics, Cognitive Architectures, Embodiment, Situatedness, Symbol Grounding}
}
