6 Search Results for "Paul, Richard"


Document
Nash-Bargaining-Based Models for Matching Markets: One-Sided and Two-Sided; Fisher and Arrow-Debreu

Authors: Mojtaba Hosseini and Vijay V. Vazirani

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
This paper addresses two deficiencies of models in the area of matching-based market design. The first arises from the recent realization that the most prominent solution that uses cardinal utilities, namely the Hylland-Zeckhauser (HZ) mechanism [Hylland and Zeckhauser, 1979], is intractable; computation of even an approximate equilibrium is PPAD-complete [Vazirani and Yannakakis, 2021; Chen et al., 2021]. The second is the extreme paucity of models that use cardinal utilities, in sharp contrast with general equilibrium theory. Our paper addresses both of these issues by proposing Nash-bargaining-based matching market models. Since the Nash bargaining solution is captured by a convex program, efficiency follows; in addition, it possesses a number of desirable game-theoretic properties. Our approach yields a rich collection of models: for one-sided as well as two-sided markets, for Fisher as well as Arrow-Debreu settings, and for a wide range of utility functions, all the way from linear to Leontief. We also give very fast implementations for these models, which solve large instances, with n = 2000, in one hour on a PC, even for a two-sided matching market. A number of new ideas were needed, beyond the standard methods, to obtain these implementations.
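
The connection to convex programming is standard: the Nash bargaining solution maximizes the product of the agents' utility gains over their disagreement utilities, i.e. a sum of logarithms, over the feasible set. As a rough illustration only (assuming a one-sided Fisher-type market with linear utilities and a zero disagreement point; the paper's actual formulations cover many more settings), such a model takes the form of an Eisenberg-Gale-like convex program over the fractional matching polytope:

\[
\begin{aligned}
\max_{x \ge 0} \quad & \sum_{i} \log\Big( \sum_{j} u_{ij}\, x_{ij} \Big) \\
\text{s.t.} \quad    & \sum_{j} x_{ij} \le 1 \quad \text{for each agent } i, \\
                     & \sum_{i} x_{ij} \le 1 \quad \text{for each good } j.
\end{aligned}
\]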

Cite as

Mojtaba Hosseini and Vijay V. Vazirani. Nash-Bargaining-Based Models for Matching Markets: One-Sided and Two-Sided; Fisher and Arrow-Debreu. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 86:1-86:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hosseini_et_al:LIPIcs.ITCS.2022.86,
  author =	{Hosseini, Mojtaba and Vazirani, Vijay V.},
  title =	{{Nash-Bargaining-Based Models for Matching Markets: One-Sided and Two-Sided; Fisher and Arrow-Debreu}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{86:1--86:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.86},
  URN =		{urn:nbn:de:0030-drops-156821},
  doi =		{10.4230/LIPIcs.ITCS.2022.86},
  annote =	{Keywords: Matching-based market design, Nash bargaining, convex optimization, Frank-Wolfe algorithm, cutting planes, general equilibrium theory, one-sided markets, two-sided markets}
}
Document
Keyboards as a New Model of Computation

Authors: Yoan Géran, Bastien Laboureix, Corto Mascle, and Valentin D. Richard

Published in: LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)


Abstract
We introduce a new formalisation of language computation, called keyboards. We consider a set of atomic operations (writing a letter, erasing a letter, going to the right or to the left) and we define a keyboard as a set of finite sequences of such operations, called keys. The generated language is the set of words obtained by applying some non-empty sequence of those keys. Unlike classical models of computation, every key can be applied at any time. We define various classes of languages based on different sets of atomic operations, and compare their expressive powers. We also compare them to rational, context-free and context-sensitive languages. We obtain a strict hierarchy of classes, whose expressiveness is orthogonal to that of the aforementioned classical models. We also study closure properties of those classes, as well as fundamental complexity problems on keyboards.
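
To make the model concrete, here is a minimal Python sketch of one plausible reading of the semantics (an assumption for illustration, not the paper's exact definition): a configuration is a word plus a cursor position, writing inserts a letter at the cursor, erasing removes the letter to the left of the cursor, and moves are clamped at the two ends of the word.

# Hypothetical simulation of the keyboard model. A key is a finite sequence of
# atomic operations; the generated language is every word reachable from the
# empty configuration by applying a non-empty sequence of keys.
from itertools import product

def apply_key(config, key):
    word, pos = config
    for op in key:
        if op == "L":                        # move the cursor one step left
            pos = max(0, pos - 1)
        elif op == "R":                      # move the cursor one step right
            pos = min(len(word), pos + 1)
        elif op == "E":                      # erase the letter left of the cursor
            if pos > 0:
                word = word[:pos - 1] + word[pos:]
                pos -= 1
        else:                                # write the letter `op` at the cursor
            word = word[:pos] + op + word[pos:]
            pos += 1
    return word, pos

def generated_words(keyboard, max_keys):
    """Words obtained by applying between 1 and max_keys keys (repetition allowed)."""
    words = set()
    for n in range(1, max_keys + 1):
        for seq in product(keyboard, repeat=n):
            config = ("", 0)
            for key in seq:
                config = apply_key(config, key)
            words.add(config[0])
    return words

# Example: a keyboard with the single key "ab" generates (ab)^n, n >= 1.
print(generated_words([["a", "b"]], 3))      # {'ab', 'abab', 'ababab'}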

Cite as

Yoan Géran, Bastien Laboureix, Corto Mascle, and Valentin D. Richard. Keyboards as a New Model of Computation. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 49:1-49:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{geran_et_al:LIPIcs.MFCS.2021.49,
  author =	{G\'{e}ran, Yoan and Laboureix, Bastien and Mascle, Corto and Richard, Valentin D.},
  title =	{{Keyboards as a New Model of Computation}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{49:1--49:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.49},
  URN =		{urn:nbn:de:0030-drops-144896},
  doi =		{10.4230/LIPIcs.MFCS.2021.49},
  annote =	{Keywords: formal languages, models of computation, automata theory}
}
Document
Dynamic Matching Algorithms in Practice

Authors: Monika Henzinger, Shahbaz Khan, Richard Paul, and Christian Schulz

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
In recent years, significant advances have been made in the design and analysis of fully dynamic maximal matching algorithms. However, these theoretical results have received very little attention from the practical perspective. Few of the algorithms have been implemented and tested on real datasets, and their practical potential is far from understood. In this paper, we attempt to bridge the gap between theory and practice that is currently observed for the fully dynamic maximal matching problem. We engineer several algorithms and empirically study them on an extensive set of dynamic instances.
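
For intuition, a very simple baseline (illustrative only; the algorithms engineered in this paper are more sophisticated) maintains a maximal matching as follows: match a newly inserted edge if both endpoints are free, and when a matched edge is deleted, greedily try to rematch the two freed endpoints.

# Naive fully dynamic maximal matching baseline (a sketch, not one of the
# paper's engineered algorithms).
from collections import defaultdict

class DynamicMaximalMatching:
    def __init__(self):
        self.adj = defaultdict(set)    # current adjacency lists
        self.mate = {}                 # vertex -> matched partner

    def _try_match(self, v):
        """Match the free vertex v to any free neighbour, if one exists."""
        if v in self.mate:
            return
        for u in self.adj[v]:
            if u not in self.mate:
                self.mate[v] = u
                self.mate[u] = v
                return

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:      # the deleted edge was matched
            del self.mate[u]
            del self.mate[v]
            self._try_match(u)         # rematch the freed endpoints greedily
            self._try_match(v)

    def matching(self):
        return {tuple(sorted(e)) for e in self.mate.items()}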

Cite as

Monika Henzinger, Shahbaz Khan, Richard Paul, and Christian Schulz. Dynamic Matching Algorithms in Practice. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 58:1-58:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{henzinger_et_al:LIPIcs.ESA.2020.58,
  author =	{Henzinger, Monika and Khan, Shahbaz and Paul, Richard and Schulz, Christian},
  title =	{{Dynamic Matching Algorithms in Practice}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{58:1--58:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.58},
  URN =		{urn:nbn:de:0030-drops-129243},
  doi =		{10.4230/LIPIcs.ESA.2020.58},
  annote =	{Keywords: Matching, Dynamic Matching, Blossom Algorithm}
}
Document
Cross-Dictionary Linking at Sense Level with a Double-Layer Classifier

Authors: Roser Saurí, Louis Mahon, Irene Russo, and Mironas Bitinis

Published in: OASIcs, Volume 70, 2nd Conference on Language, Data and Knowledge (LDK 2019)


Abstract
We present a system for linking dictionaries at the sense level, which is part of a wider programme aiming to extend current lexical resources and to create new ones by automatic means. One of the main challenges of the sense linking task is the existence of non-one-to-one mappings among senses. Our system handles this issue by addressing the task as a binary classification problem using standard Machine Learning methods, where each sense pair is classified independently of the others. In addition, it implements a second, statistically based classification layer to also model the dependence existing among sense pairs, namely, the fact that a sense in one dictionary that is already linked to a sense in the other dictionary has a lower probability of being linked to a further sense. The resulting double-layer classifier achieves global Precision and Recall scores of 0.91 and 0.80, respectively.
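
A minimal Python sketch of the double-layer idea follows; the feature set and the exact second-layer adjustment are assumptions for illustration, not the published system.

# Layer 1: classify each candidate sense pair independently with any trained
# binary classifier. Layer 2: lower a pair's score when one of its senses
# already participates in a stronger link, modelling the dependence among
# sense pairs described in the abstract.
import numpy as np

def link_senses(clf, pairs, features, alpha=0.5, threshold=0.5):
    """clf: trained scikit-learn-style classifier exposing predict_proba;
    pairs: candidate (sense_a, sense_b) pairs; features: one feature row per
    pair (e.g. gloss similarity, shared translations; hypothetical features)."""
    scores = clf.predict_proba(np.asarray(features))[:, 1]    # layer 1
    links, best = [], {}
    for idx in np.argsort(-scores):                           # most confident first
        a, b = pairs[idx]
        penalty = max(best.get(a, 0.0), best.get(b, 0.0))     # strongest existing link
        adjusted = scores[idx] * (1.0 - alpha * penalty)      # layer 2 adjustment
        if adjusted >= threshold:
            links.append((a, b, adjusted))
            best[a] = max(best.get(a, 0.0), adjusted)
            best[b] = max(best.get(b, 0.0), adjusted)
    return links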

Cite as

Roser Saurí, Louis Mahon, Irene Russo, and Mironas Bitinis. Cross-Dictionary Linking at Sense Level with a Double-Layer Classifier. In 2nd Conference on Language, Data and Knowledge (LDK 2019). Open Access Series in Informatics (OASIcs), Volume 70, pp. 20:1-20:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{sauri_et_al:OASIcs.LDK.2019.20,
  author =	{Saur{\'\i}, Roser and Mahon, Louis and Russo, Irene and Bitinis, Mironas},
  title =	{{Cross-Dictionary Linking at Sense Level with a Double-Layer Classifier}},
  booktitle =	{2nd Conference on Language, Data and Knowledge (LDK 2019)},
  pages =	{20:1--20:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-105-4},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{70},
  editor =	{Eskevich, Maria and de Melo, Gerard and F\"{a}th, Christian and McCrae, John P. and Buitelaar, Paul and Chiarcos, Christian and Klimek, Bettina and Dojchinovski, Milan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2019.20},
  URN =		{urn:nbn:de:0030-drops-103848},
  doi =		{10.4230/OASIcs.LDK.2019.20},
  annote =	{Keywords: Word sense linking, word sense mapping, lexical translation, lexical resources, language data construction, multilingual data, data integration across languages}
}
Document
Speeding up Lazy-Grounding Answer Set Solving

Authors: Richard Taupe

Published in: OASIcs, Volume 64, Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)


Abstract
The grounding bottleneck is an important open issue in Answer Set Programming. Lazy grounding addresses it by interleaving grounding and search. The performance of current lazy-grounding solvers is not yet comparable to that of ground-and-solve systems, however. The aim of this thesis is to extend prior work on lazy grounding with novel heuristics and other techniques, such as non-ground conflict learning, in order to speed up solving. Some of the expected results will be beneficial for ground-and-solve systems as well.
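
As a rough illustration of the idea (heavily simplified and assumed, not the solver developed in the thesis), lazy grounding produces ground rule instances only when their bodies are supported by the atoms derived so far, instead of grounding the whole program upfront; real lazy-grounding ASP solvers interleave this with choices, propagation and conflict handling.

# Toy lazy-grounding-style bottom-up evaluation for a negation-free,
# Datalog-like program: a rule instance is grounded (and its head derived)
# only once its body atoms already hold. This is an illustrative assumption,
# far simpler than an actual lazy-grounding ASP solver.
from itertools import product

def lazy_ground_solve(facts, rules, constants):
    """rules: list of (head, body); atoms are tuples like ("edge", "X", "Y"),
    where upper-case strings are variables and everything else is a constant."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            variables = sorted({t for atom in body + [head]
                                for t in atom[1:]
                                if isinstance(t, str) and t.isupper()})
            for values in product(constants, repeat=len(variables)):
                sub = dict(zip(variables, values))
                ground = lambda atom: (atom[0],) + tuple(sub.get(t, t) for t in atom[1:])
                if all(ground(a) in derived for a in body):   # body holds: ground it
                    h = ground(head)
                    if h not in derived:
                        derived.add(h)
                        changed = True
    return derived

facts = {("edge", 1, 2), ("edge", 2, 3)}
rules = [(("path", "X", "Y"), [("edge", "X", "Y")]),
         (("path", "X", "Z"), [("path", "X", "Y"), ("edge", "Y", "Z")])]
print(lazy_ground_solve(facts, rules, constants=[1, 2, 3]))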

Cite as

Richard Taupe. Speeding up Lazy-Grounding Answer Set Solving. In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Open Access Series in Informatics (OASIcs), Volume 64, pp. 20:1-20:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{taupe:OASIcs.ICLP.2018.20,
  author =	{Taupe, Richard},
  title =	{{Speeding up Lazy-Grounding Answer Set Solving}},
  booktitle =	{Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)},
  pages =	{20:1--20:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-090-3},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{64},
  editor =	{Dal Palu', Alessandro and Tarau, Paul and Saeedloei, Neda and Fodor, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICLP.2018.20},
  URN =		{urn:nbn:de:0030-drops-98861},
  doi =		{10.4230/OASIcs.ICLP.2018.20},
  annote =	{Keywords: answer set programming, lazy grounding, heuristics}
}
Document
Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers

Authors: Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
We prove new cell-probe lower bounds for dynamic data structures that maintain a subset of {1,2,...,n}, and compute various statistics of the set. The data structure is said to handle insertions non-adaptively if the locations of memory accessed depend only on the element being inserted, and not on the contents of the memory. For any such data structure that can compute the median of the set, we prove that: t_{med} >= Omega(n^{1/(t_{ins}+1)}/(w^2 * t_{ins}^2)), where t_{ins} is the number of memory locations accessed during insertions, t_{med} is the number of memory locations accessed to compute the median, and w is the number of bits stored in each memory location. When the data structure is able to perform deletions non-adaptively and compute the minimum non-adaptively, we prove t_{min} + t_{del} >= Omega(log n /(log w + log log n)), where t_{min} is the number of locations accessed to compute the minimum, and t_{del} is the number of locations accessed to perform deletions. For the predecessor search problem, where the data structure is required to compute the predecessor of any element in the set, we prove that if computing the predecessors can be done non-adaptively, then either t_{pred} >= Omega(log n/(log log n + log w)), or t_{ins} >= Omega(n^{1/(2(t_{pred}+1))}), where t_{pred} is the number of locations accessed to compute predecessors. These bounds are nearly matched by Binary Search Trees in some range of parameters. Our results follow from using the Sunflower Lemma of Erdös and Rado [Paul Erdös and Richard Rado, 1960] together with several kinds of encoding arguments.
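
For reference, the classical form of the Sunflower Lemma invoked above is the following (the paper may use a variant of it). A sunflower with p petals is a family of p sets whose pairwise intersections are all equal to one common core; the lemma of Erdös and Rado states:

\[
  \text{If } \mathcal{F} \text{ is a family of sets, each of size } k, \text{ and } |\mathcal{F}| > k!\,(p-1)^{k},
  \text{ then } \mathcal{F} \text{ contains a sunflower with } p \text{ petals.}
\]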

Cite as

Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao. Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 27:1-27:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{natarajanramamoorthy_et_al:LIPIcs.CCC.2018.27,
  author =	{Natarajan Ramamoorthy, Sivaramakrishnan and Rao, Anup},
  title =	{{Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{27:1--27:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.27},
  URN =		{urn:nbn:de:0030-drops-88625},
  doi =		{10.4230/LIPIcs.CCC.2018.27},
  annote =	{Keywords: Non-adaptive data structures, Sunflower lemma}
}