3 Search Results for "Pack Kaelbling, Leslie"


Document
Learning Grammatical Models for Object Recognition

Authors: Meg Aycinena Lippow, Leslie Pack Kaelbling, and Tomas Lozano-Perez

Published in: Dagstuhl Seminar Proceedings, Volume 8091, Logic and Probability for Scene Interpretation (2008)


Abstract
Many object recognition systems are limited by their inability to share common parts or structure among related object classes. This capability is desirable because it allows information about parts and relationships in one object class to be generalized to other classes for which it is relevant. This ability has the potential to allow effective parameter learning from fewer examples and better generalization of the learned models to unseen instances, and it enables more efficient recognition. With this goal in mind, we have designed a representation and recognition framework that captures structural variability and shared part structure within and among object classes. The framework uses probabilistic geometric grammars (PGGs) to represent object classes recursively in terms of their parts, thereby exploiting the hierarchical and substitutive structure inherent to many types of objects. To incorporate geometric and appearance information, we extend traditional probabilistic context-free grammars to represent distributions over the relative geometric characteristics of object parts as well as the appearance of primitive parts. We describe an efficient dynamic programming algorithm for object categorization and localization in images given a PGG model. We also develop an EM algorithm to estimate the parameters of a grammar structure from training data, and a search-based structure learning approach that finds a compact grammar to explain the image data while sharing substructure among classes. Finally, we describe a set of experiments that demonstrate empirically that the system provides a performance benefit.
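To make the recursive structure concrete, here is a minimal, hypothetical Python sketch of how a PGG might be represented and scored. The names (Rule, PGG, geom_score, appear_score) are illustrative placeholders rather than the authors' implementation; a full system would also memoize over (symbol, region) cells and search over part placements inside a region.

# Hypothetical sketch of a probabilistic geometric grammar (PGG): object classes
# expand recursively into parts, each expansion rule carries a probability, and
# geometric/appearance terms score the parts. Assumes an acyclic grammar for
# brevity. Illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List
import math

@dataclass
class Rule:
    head: str            # nonterminal class or part, e.g. "chair"
    parts: List[str]     # children of this expansion, e.g. ["back", "seat", "base"]
    log_prob: float      # log probability of choosing this expansion

@dataclass
class PGG:
    rules: Dict[str, List[Rule]] = field(default_factory=dict)

    def add_rule(self, head: str, parts: List[str], prob: float) -> None:
        self.rules.setdefault(head, []).append(Rule(head, parts, math.log(prob)))

    def best_parse_score(self, symbol, region, geom_score, appear_score):
        # Best log score of deriving `symbol` over an image `region`: take the
        # best rule, score each part geometrically relative to the parent
        # region, and recurse. Symbols with no expansion rules are primitive
        # parts and are scored by appearance alone.
        if symbol not in self.rules:
            return appear_score(symbol, region)
        best = float("-inf")
        for rule in self.rules[symbol]:
            score = rule.log_prob
            for part in rule.parts:
                score += geom_score(part, region)
                score += self.best_parse_score(part, region, geom_score, appear_score)
            best = max(best, score)
        return best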

Cite as

Meg Aycinena Lippow, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Learning Grammatical Models for Object Recognition. In Logic and Probability for Scene Interpretation. Dagstuhl Seminar Proceedings, Volume 8091, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{aycinenalippow_et_al:DagSemProc.08091.9,
  author =	{Aycinena Lippow, Meg and Kaelbling, Leslie Pack and Lozano-Perez, Tomas},
  title =	{{Learning Grammatical Models for Object Recognition}},
  booktitle =	{Logic and Probability for Scene Interpretation},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8091},
  editor =	{Anthony G. Cohn and David C. Hogg and Ralf M\"{o}ller and Bernd Neumann},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08091.9},
  URN =		{urn:nbn:de:0030-drops-16113},
  doi =		{10.4230/DagSemProc.08091.9},
  annote =	{Keywords: Object recognition, grammars, structure learning}
}
Document
Learning Probabilistic Relational Dynamics for Multiple Tasks

Authors: Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This extended abstract addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains.
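A minimal Python sketch of the coordinate-ascent loop described above, assuming per-task fitting and prototype re-estimation are available as black-box routines; fit_task_rules and fit_prototype are hypothetical names introduced for illustration, not the authors' code.

# Hypothetical sketch of the coordinate-ascent scheme: alternate between
# (a) fitting each task-specific rule set against its data plus the shared
# prior, and (b) re-estimating the prior's rule-set prototype from the current
# task-specific rule sets. Illustrative only.
def coordinate_ascent(tasks, init_prototype, fit_task_rules, fit_prototype, n_iters=20):
    """tasks: per-task training data.
    fit_task_rules(data, prototype): rule set maximizing likelihood plus prior.
    fit_prototype(rule_sets): prototype maximizing the prior's fit to the rule sets."""
    prototype = init_prototype
    rule_sets = []
    for _ in range(n_iters):
        # Step 1: optimize each task's rule set under the current prior.
        rule_sets = [fit_task_rules(data, prototype) for data in tasks]
        # Step 2: optimize the prior's prototype given the task rule sets.
        prototype = fit_prototype(rule_sets)
    return prototype, rule_sets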

Cite as

Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, and Leslie Pack Kaelbling. Learning Probabilistic Relational Dynamics for Multiple Tasks. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{deshpande_et_al:DagSemProc.07161.4,
  author =	{Deshpande, Ashwin and Milch, Brian and Zettlemoyer, Luke S. and Kaelbling, Leslie Pack},
  title =	{{Learning Probabilistic Relational Dynamics for Multiple Tasks}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.4},
  URN =		{urn:nbn:de:0030-drops-13846},
  doi =		{10.4230/DagSemProc.07161.4},
  annote =	{Keywords: Hierarchical Bayesian models, transfer learning, multi-task learning, probabilistic planning rules}
}
Document
Logical Particle Filtering

Authors: Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling

Published in: Dagstuhl Seminar Proceedings, Volume 7161, Probabilistic, Logical and Relational Learning - A Further Synthesis (2008)


Abstract
In this paper, we consider the problem of filtering in relational hidden Markov models. We present a compact representation for such models and an associated logical particle filtering algorithm. Each particle contains a logical formula that describes a set of states. The algorithm updates the formulae as new observations are received. Since a single particle tracks many states, this filter can be more accurate than a traditional particle filter in high dimensional state spaces, as we demonstrate in experiments.
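The following is a hypothetical Python sketch of a single update step of such a logical particle filter. Formulas are treated as opaque objects, and progress_formula and obs_likelihood are assumed interfaces introduced for illustration, not the authors' implementation.

# Hypothetical sketch: each particle is (formula, weight), where the formula
# denotes a set of states. An update progresses the formula through the action
# model, reweights by the observation likelihood, and resamples, as in a
# standard particle filter. Illustrative only.
import random

def logical_pf_step(particles, action, observation, progress_formula, obs_likelihood):
    """particles: list of (formula, weight) pairs.
    progress_formula(formula, action): formula describing the successor states.
    obs_likelihood(formula, observation): P(observation | states in formula)."""
    updated = []
    for formula, weight in particles:
        new_formula = progress_formula(formula, action)   # transition update
        updated.append((new_formula, weight * obs_likelihood(new_formula, observation)))
    total = sum(w for _, w in updated)
    if total <= 0:
        # Degenerate case: no particle explains the observation; fall back to uniform.
        weights = [1.0] * len(updated)
    else:
        weights = [w / total for _, w in updated]
    formulas = [f for f, _ in updated]
    resampled = random.choices(formulas, weights=weights, k=len(particles))
    return [(f, 1.0 / len(particles)) for f in resampled]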

Cite as

Luke S. Zettlemoyer, Hanna M. Pasula, and Leslie Pack Kaelbling. Logical Particle Filtering. In Probabilistic, Logical and Relational Learning - A Further Synthesis. Dagstuhl Seminar Proceedings, Volume 7161, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{zettlemoyer_et_al:DagSemProc.07161.5,
  author =	{Zettlemoyer, Luke S. and Pasula, Hanna M. and Pack Kaelbling, Leslie},
  title =	{{Logical Particle Filtering}},
  booktitle =	{Probabilistic, Logical and Relational Learning - A Further Synthesis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7161},
  editor =	{Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07161.5},
  URN =		{urn:nbn:de:0030-drops-13792},
  doi =		{10.4230/DagSemProc.07161.5},
  annote =	{Keywords: Particle filter, logical hidden Markov model}
}
