44 Search Results for "Mustière, Sébastien"


Document
Post-Quantum Single Secret Leader Election (SSLE) from Publicly Re-Randomizable Commitments

Authors: Dan Boneh, Aditi Partap, and Lior Rotem

Published in: LIPIcs, Volume 282, 5th Conference on Advances in Financial Technologies (AFT 2023)


Abstract
A Single Secret Leader Election (SSLE) enables a group of parties to randomly choose exactly one leader from the group with the restriction that the identity of the leader will be known to the chosen leader and nobody else. At a later time, the elected leader should be able to publicly reveal her identity and prove that she is the elected leader. The election process itself should work properly even if many registered users are passive and do not send any messages. SSLE is used to strengthen the security of proof-of-stake consensus protocols by ensuring that the identity of the block proposer remains unknown until the proposer publishes a block. Boneh, Eskandarian, Hanzlik, and Greco (AFT'20) defined the concept of an SSLE and gave several constructions. Their most efficient construction is based on the difficulty of the Decision Diffie-Hellman problem in a cyclic group. In this work we construct the first efficient SSLE protocols based on the standard Learning With Errors (LWE) problem on integer lattices, as well as the Ring-LWE problem. Both are believed to be post-quantum secure. Our constructions generalize the paradigm of Boneh et al. by introducing the concept of a re-randomizable commitment (RRC). We then construct several post-quantum RRC schemes from lattice assumptions and prove the security of the derived SSLE protocols. Constructing a lattice-based RRC scheme is non-trivial, and may be of independent interest.
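The lattice constructions are involved, but the RRC interface itself is small. Below is a minimal Python sketch of that interface, instantiated with a toy discrete-log-style commitment in the spirit of the DDH-based construction of Boneh et al. (AFT'20) rather than this paper's LWE schemes; all names are illustrative and the parameters are not production-grade.

import secrets

P = 2**127 - 1   # a Mersenne prime; toy multiplicative group mod P
G = 3            # toy generator

def commit(k):
    # Commit to the secret key k as c = (u, u^k) for a random group element u.
    r = 2 + secrets.randbelow(P - 3)
    u = pow(G, r, P)
    return (u, pow(u, k, P))

def rerandomize(c):
    # Public re-randomization: anyone can map (u, u^k) to (u^s, u^{s*k}),
    # which commits to the same k but is unlinkable to c under DDH.
    s = 2 + secrets.randbelow(P - 3)
    u, v = c
    return (pow(u, s, P), pow(v, s, P))

def opens_to(c, k):
    # The owner of k recognizes (and can later prove ownership of) her commitment.
    u, v = c
    return pow(u, k, P) == v

# In an SSLE election, the registered commitments are shuffled and re-randomized;
# each party tests the winning commitment against her own secret.
k = 2 + secrets.randbelow(P - 3)
c = rerandomize(rerandomize(commit(k)))
assert opens_to(c, k)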

Cite as

Dan Boneh, Aditi Partap, and Lior Rotem. Post-Quantum Single Secret Leader Election (SSLE) from Publicly Re-Randomizable Commitments. In 5th Conference on Advances in Financial Technologies (AFT 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 282, pp. 26:1-26:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{boneh_et_al:LIPIcs.AFT.2023.26,
  author =	{Boneh, Dan and Partap, Aditi and Rotem, Lior},
  title =	{{Post-Quantum Single Secret Leader Election (SSLE) from Publicly Re-Randomizable Commitments}},
  booktitle =	{5th Conference on Advances in Financial Technologies (AFT 2023)},
  pages =	{26:1--26:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-303-4},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{282},
  editor =	{Bonneau, Joseph and Weinberg, S. Matthew},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2023.26},
  URN =		{urn:nbn:de:0030-drops-192158},
  doi =		{10.4230/LIPIcs.AFT.2023.26},
  annote =	{Keywords: Consensus, Leader Election, Post-Quantum, Lattice Cryptography, Blockchain}
}
Document
Invited Talk
Beyond Optimal Solutions for Real-World Problems (Invited Talk)

Authors: Maria Garcia de la Banda

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
Combinatorial optimisation technology has come a long way. We now have mature high-level modelling languages in which to specify a model of the particular problem of interest [Nethercote et al., 2007; Frisch et al., 2008; Van Hentenryck, 1999; Fourer et al., 1990]; robust complete solvers in each major constraint paradigm, including Constraint Programming (CP), MaxSAT [Jessica Davies and Fahiem Bacchus, 2011; Alexey Ignatiev et al., 2019], and Mixed Integer Programming (MIP); effective incomplete search techniques that can easily be combined with complete solvers to speed up the search, such as Large Neighbourhood Search [Paul Shaw, 1998]; and enough general knowledge about modelling techniques to understand the need for our models to incorporate components such as global constraints [Willem-Jan van Hoeve and Irit Katriel, 2006], symmetry constraints [Ian P. Gent et al., 2006], and more. All this has significantly reduced the amount of knowledge required to apply this technology successfully to the many different combinatorial optimisation problems that permeate our society. And yet, not many organisations use such advanced optimisation technology; instead, they often rely on the solutions provided by problem-specific algorithms that are implemented in traditional imperative languages and lack any of the above advances. Further, while advanced optimisation technology is particularly suitable for the kind of complex human-in-the-loop decision-making problems that occur in critical sectors of our society, including health, transport, energy, disaster management, environment and finance, these decisions are often still made by people with little or no technological support. In this extended abstract I argue that to change this state of affairs, our research focus needs to change from improving the technology on its own to improving it so that users can better trust, use, and maintain the optimisation systems that we develop with it. The rest of this extended abstract discusses my personal experiences and opinions on these three points.

Trust. I highlight trust (which focuses on the user’s point of view) rather than trustworthiness (which is a characteristic of the software itself) because I think it is the former, rather than the latter, that is at stake for the adoption of optimisation technology. One of the biggest hurdles I have found for trust in the context of optimisation systems is for the domain experts to (feel like they) understand the underlying model. While many users will never do so (or have to), I believe it is key for domain experts to have a high-level understanding of the constraints in the model, since their (dis)trust will likely spread through the organisation, impacting the adoption of the system. Thanks to the use of high-level modelling languages in CP, our group has achieved this [Matthias Klapperstueck et al., 2023] by documenting the constraints in a language the user knows (mathematics) and linking each constraint to the particular part of the model that implements it (via comments). While domain experts do not completely understand the model, the similarity between the format they understand (mathematics) and the model constraint has helped them verify our perception of their problem and improved their trust in the model. However, more needs to be done in this direction via the development of formal techniques.
For example, our group is exploring the use of domain-specific languages [Hudak, 1997] as a bridge between domain experts and modellers that helps both trust and maintenance (see later). This [Sameela Suharshani Wijesundara et al., 2023] and other approaches need to be explored. A very significant source of trust for our domain experts (and of trustworthiness for the software) has been the development of two different models, implemented by two different people, for the same problem [Matthias Klapperstueck et al., 2023]. While this can be seen as a prohibitively expensive exercise, it did not take that long once the first model was mature, is a good way to onboard new optimisation team members, and has helped us detect not only bugs but also differences in the interpretation of domain expert information. For optimisation problems where it is not possible to verify the optimality (or even correctness) of the solution, we see such redundant modelling as the only solution for now. Interestingly, a significant step forward in obtaining the trust of our domain experts has been the generation of an optimality gap whenever an optimal solution could not be found due to time constraints. While explaining this concept took time, once understood it has boosted their trust, particularly when tackling problems where the solution is not easily verifiable or when approximated models/data are used (needed for speed, see later). This makes it difficult to work with CP and SAT solvers, as they usually lack tight lower bounds. Finally, trust is often developed through the use of the system, which I discuss below.

Use. Usability is known to be key for the deployment of software systems. By "system" in our context, I refer to the combination of the problem model(s), the associated solver(s) and, importantly, the User Interface (UI) that often integrates them and is fundamental to their success. In addition to the traditional usability characteristics of software systems, I believe an optimisation system requires particular care in the following areas. Interaction, i.e., the system must allow users to interact with the UI not only to provide and modify the input data, but also to modify the constraints (at the very least by turning some on/off) as well as explore and compare solutions, as argued in [David Meignan et al., 2015; Jie Liu et al., 2021]. Incremental compilers and solvers would significantly help in making this easier, as would generic ways for the UIs to communicate with them. Conflict resolution, that is, ensuring the system can not only detect infeasible instances, but also support users in understanding the data/constraints that cause infeasibility and how to modify the instance to make it feasible. Any interactive optimisation system that has users will likely have conflicts. Thus, it is mandatory for CP to improve its conflict resolution technology which, while existent [João Marques-Silva and Alessandro Previti, 2014; Lauffer and Topcu, 2019; Ilankaikone Senthooran et al., 2023], is not widespread and is often still problem-dependent, overwhelming (in the number of constraints shown to the user), and slow. Without it, users will be "stumped" when (rather than if) infeasibility is reached. Solution diversity, that is, supporting users in obtaining a diverse set of (close-to-optimal) solutions, where diversity is measured by a user-provided metric that must itself be modelled.
While some solver-independent technology has been developed and implemented for this [Emmanuel Hebrard et al., 2005; Thierry Petit and Andrew C. Trapp, 2015; Linnea Ingmar et al., 2020], it should be easier to use and more widespread. Further, it requires sophisticated solution comparison capabilities and, importantly, for optimal solutions to be found in seconds rather than hours. This brings me to speed, an area where CP solvers are falling behind. Most of our research group's applications now use MIP solvers due to the need for floats (which precludes us from using learning solvers such as Chuffed [Geoffrey Chu, 2013]), but also due to the lack of effective warm-start processes that are available in MIP solvers. Interestingly, data and model approximations have been shown to achieve orders-of-magnitude speedups with small reductions in optimality [Matthias Klapperstueck et al., 2023]. Developing generic (i.e., problem-independent) accurate approximations would be extremely useful for complex decision systems. Other areas where I think generic CP methods are worth investigating more include dealing with uncertainty and online problems, ensuring solution fairness (even if it is over time), and studying predict + optimise approaches.

Maintain. I know of very few papers devoted to the issue of maintenance in optimisation technology. While this may be due to my lack of knowledge, I suspect it is also due to the limited adoption of optimisation technology. While the issues in this area are again common to other software systems, I believe the solutions for CP require special attention. For example, the issue of changes in user requirements (which our research group calls problem drift) seems particularly prevalent in decision-making systems, as such problems can evolve rapidly due to unforeseen circumstances. This can make optimisation systems obsolete faster than expected. Our research group has proposed to tackle problem drift by developing a requirements model, implemented in the above-mentioned domain-specific languages and created by both domain experts and modellers, that, when modified, re-generates parts of the model to support the modifications [Sameela Suharshani Wijesundara et al., 2023]. This and other approaches, such as the creation of reusable model components [Sophia Saller and Jana Koehler, 2022; Toby Walsh, 2003] or instantiatable classes for common problem domains, are worth investigating.
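One concrete ingredient mentioned above is the optimality gap reported when the solver runs out of time. A minimal Python sketch of how such a gap is typically computed from the incumbent objective and the solver's proven bound (assuming minimisation; the function name is illustrative):

def optimality_gap(incumbent, best_bound):
    # Relative gap between the best feasible objective found and the proven
    # lower bound; 0.0 means proven optimal (minimisation assumed).
    if incumbent == best_bound:
        return 0.0
    return abs(incumbent - best_bound) / max(abs(incumbent), 1e-12)

# "The schedule is proven within 3% of optimal" reads better to a domain
# expert than "the solver timed out":
print(f"{optimality_gap(103.0, 100.0):.1%}")   # 2.9%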

Cite as

Maria Garcia de la Banda. Beyond Optimal Solutions for Real-World Problems (Invited Talk). In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 1:1-1:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{garciadelabanda:LIPIcs.CP.2023.1,
  author =	{Garcia de la Banda, Maria},
  title =	{{Beyond Optimal Solutions for Real-World Problems}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{1:1--1:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.1},
  URN =		{urn:nbn:de:0030-drops-190384},
  doi =		{10.4230/LIPIcs.CP.2023.1},
  annote =	{Keywords: Combinatorial optimisation systems, usability, trust, maintenance}
}
Document
Sinatra: Stateful Instantaneous Updates for Commercial Browsers Through Multi-Version eXecution

Authors: Ugnius Rumsevicius, Siddhanth Venkateshwaran, Ellen Kidane, and Luís Pina

Published in: LIPIcs, Volume 263, 37th European Conference on Object-Oriented Programming (ECOOP 2023)


Abstract
Browsers are the main way in which most users experience the internet, which makes them a prime target for malicious entities. The best defense for the common user is to keep their browser always up-to-date, installing updates as soon as they are available. Unfortunately, updating a browser is disruptive as it results in loss of user state. Even though modern browsers reopen all pages (tabs) after an update to minimize inconvenience, this approach still loses all local user state in each page (e.g., contents of unsubmitted forms, including associated JavaScript validation state) and assumes that pages can be refreshed and result in the same contents. We believe this is an important barrier that keeps users from updating their browsers as frequently as possible. In this paper, we present the design, implementation, and evaluation of Sinatra, which supports instantaneous browser updates that do not result in any data loss through a novel Multi-Version eXecution (MVX) approach for JavaScript programs, combined with a sophisticated proxy. Sinatra works in pure JavaScript and does not require any browser support, so it works on closed-source browsers; it requires only trivial changes to each target page, which can be automated. First, Sinatra captures all the non-determinism available to a JavaScript program (e.g., event handlers executed, expired timers, invocations of Math.random). Our evaluation shows that Sinatra requires 6MB to store such events, and that memory grows at a modest rate of 253KB/s as the user keeps interacting with each page. When an update becomes available, Sinatra transfers the state by re-executing the same set of non-deterministic events on the new browser. During this time, which can be as long as 1.5 seconds, Sinatra uses MVX to allow the user to keep interacting with the old browser. Finally, Sinatra changes the roles of the two browsers in less than 10ms, after which the user interacts with the new browser, effectively performing a browser update with zero downtime and no loss of state.
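Sinatra itself is pure JavaScript, but the core record/replay idea is language-agnostic. A toy Python sketch, under the assumption that page state is a deterministic function of an initial state and a log of non-deterministic events (handler invocations, timer firings, Math.random results); all names are illustrative:

import random

def run_and_record(apply_fn, state, next_event, steps):
    # Old version: execute normally while logging every non-deterministic event.
    log = []
    for _ in range(steps):
        event = next_event()          # e.g. user input, expired timer, random()
        log.append(event)
        state = apply_fn(state, event)
    return state, log

def replay(apply_fn, initial_state, log):
    # New version: reconstruct an equivalent state by re-executing the same events.
    state = initial_state
    for event in log:
        state = apply_fn(state, event)
    return state

apply_fn = lambda s, e: s + e                      # toy deterministic "page"
old_state, log = run_and_record(apply_fn, 0, lambda: random.randint(0, 9), 100)
assert replay(apply_fn, 0, log) == old_state       # zero state loss on update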

Cite as

Ugnius Rumsevicius, Siddhanth Venkateshwaran, Ellen Kidane, and Luís Pina. Sinatra: Stateful Instantaneous Updates for Commercial Browsers Through Multi-Version eXecution. In 37th European Conference on Object-Oriented Programming (ECOOP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 263, pp. 26:1-26:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{rumsevicius_et_al:LIPIcs.ECOOP.2023.26,
  author =	{Rumsevicius, Ugnius and Venkateshwaran, Siddhanth and Kidane, Ellen and Pina, Lu{\'\i}s},
  title =	{{Sinatra: Stateful Instantaneous Updates for Commercial Browsers Through Multi-Version eXecution}},
  booktitle =	{37th European Conference on Object-Oriented Programming (ECOOP 2023)},
  pages =	{26:1--26:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-281-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{263},
  editor =	{Ali, Karim and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2023.26},
  URN =		{urn:nbn:de:0030-drops-182190},
  doi =		{10.4230/LIPIcs.ECOOP.2023.26},
  annote =	{Keywords: Internet browsers, dynamic software updating, multi-version execution}
}
Document
Invited Talk
Realistic Self-Stabilization (Invited Talk)

Authors: Sébastien Tixeuil

Published in: LIPIcs, Volume 253, 26th International Conference on Principles of Distributed Systems (OPODIS 2022)


Abstract
It is almost fifty years since Dijkstra coined the term "self-stabilization" to denote a distributed system able to recover correct behavior starting from any arbitrary (even unreachable) configuration. His seminal paper has triggered many works since then, exploring over the years new variants of the original concept, new application domains, and new complexity results. While the huge majority of those contributions relate to theory, considering computability and worst-case complexity issues, this talk revisits old and recent contributions through the prism of "realistic" distributed systems, aiming to address the following question: is self-stabilization relevant in practice for distributed systems?
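As a concrete reference point for the concept, here is a small Python simulation of Dijkstra's original K-state token ring (a classic illustration, not material from this talk): starting from an arbitrary, even corrupt, configuration, the ring converges so that exactly one machine holds the privilege at any time.

import random

def privileged(x, i):
    # Machine 0 holds a privilege when it equals the last machine;
    # any other machine holds one when it differs from its left neighbour.
    return x[i] == x[-1] if i == 0 else x[i] != x[i - 1]

def step(x, K):
    # A central daemon picks one privileged machine and lets it move.
    movers = [i for i in range(len(x)) if privileged(x, i)]
    i = random.choice(movers)
    x[i] = (x[0] + 1) % K if i == 0 else x[i - 1]

n, K = 5, 6                                    # K >= n guarantees stabilization
x = [random.randrange(K) for _ in range(n)]    # arbitrary starting configuration
for _ in range(500):                           # ample time to converge at n = 5
    step(x, K)
assert sum(privileged(x, i) for i in range(n)) == 1   # exactly one token left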

Cite as

Sébastien Tixeuil. Realistic Self-Stabilization (Invited Talk). In 26th International Conference on Principles of Distributed Systems (OPODIS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 253, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{tixeuil:LIPIcs.OPODIS.2022.3,
  author =	{Tixeuil, S\'{e}bastien},
  title =	{{Realistic Self-Stabilization}},
  booktitle =	{26th International Conference on Principles of Distributed Systems (OPODIS 2022)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-265-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{253},
  editor =	{Hillel, Eshcar and Palmieri, Roberto and Rivi\`{e}re, Etienne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2022.3},
  URN =		{urn:nbn:de:0030-drops-176232},
  doi =		{10.4230/LIPIcs.OPODIS.2022.3},
  annote =	{Keywords: Self-stabilization, Distributed systems, Probable stabilization, Performance evaluation, Asynchronous message passing, Multi-tolerance}
}
Document
The Step Complexity of Multidimensional Approximate Agreement

Authors: Hagit Attiya and Faith Ellen

Published in: LIPIcs, Volume 253, 26th International Conference on Principles of Distributed Systems (OPODIS 2022)


Abstract
Approximate agreement allows a set of n processes to obtain outputs that are within a specified distance ε > 0 of one another and within the convex hull of the inputs. When the inputs are real numbers, there is a wait-free shared-memory approximate agreement algorithm [Moran, 1995] whose step complexity is in O(n log(S/ε)), where S, the spread of the inputs, is the maximal distance between inputs. There is another wait-free algorithm [Schenk, 1995] that avoids the dependence on n and achieves O(log(M/ε)) step complexity where M, the magnitude of the inputs, is the absolute value of the maximal input. This paper considers whether it is possible to obtain an approximate agreement algorithm whose step complexity depends on neither n nor the magnitude of the inputs, which can be much larger than their spread. On the negative side, we prove that Ω(min{(log M)/(log log M), (√log n)/(log log n)}) is a lower bound on the step complexity of approximate agreement, even when the inputs are real numbers. On the positive side, we prove that a polylogarithmic dependence on n and S/ε can be achieved, by presenting an approximate agreement algorithm with O(log n (log n + log(S/ε))) step complexity. Our algorithm works for multidimensional domains. The step complexity can be further restricted to be in O(min{log n (log n + log (S/ε)), log(M/ε)}) when the inputs are real numbers.
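The algorithms above are intricate; the following toy Python sketch only illustrates the spread-halving that underlies the log(S/ε) terms, in an idealized synchronous, failure-free setting (so it is not the paper's wait-free shared-memory algorithm): each round, every process moves halfway toward the midpoint of the values it sees, so the spread halves and outputs stay in the convex hull of the inputs.

import math

def approximate_agreement(inputs, eps):
    vals = list(inputs)
    rounds = 0
    while max(vals) - min(vals) > eps:
        mid = (min(vals) + max(vals)) / 2       # everyone sees all values here
        vals = [(v + mid) / 2 for v in vals]    # convex combination: stays in hull
        rounds += 1                             # spread halves every round
    return vals, rounds

outs, r = approximate_agreement([0.0, 3.0, 17.0], 0.01)
assert max(outs) - min(outs) <= 0.01
assert all(0.0 <= v <= 17.0 for v in outs)      # within the convex hull
print(r, math.ceil(math.log2(17.0 / 0.01)))     # 11 rounds = ceil(log2(S/eps))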

Cite as

Hagit Attiya and Faith Ellen. The Step Complexity of Multidimensional Approximate Agreement. In 26th International Conference on Principles of Distributed Systems (OPODIS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 253, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{attiya_et_al:LIPIcs.OPODIS.2022.6,
  author =	{Attiya, Hagit and Ellen, Faith},
  title =	{{The Step Complexity of Multidimensional Approximate Agreement}},
  booktitle =	{26th International Conference on Principles of Distributed Systems (OPODIS 2022)},
  pages =	{6:1--6:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-265-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{253},
  editor =	{Hillel, Eshcar and Palmieri, Roberto and Rivi\`{e}re, Etienne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2022.6},
  URN =		{urn:nbn:de:0030-drops-176261},
  doi =		{10.4230/LIPIcs.OPODIS.2022.6},
  annote =	{Keywords: approximate agreement, conflict detection, shared memory, wait-freedom, step complexity}
}
Document
HappyMap : A Generalized Multicalibration Method

Authors: Zhun Deng, Cynthia Dwork, and Linjun Zhang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Multicalibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class C, multi-calibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in C. Fairness is captured by incorporating demographic subgroups into the class of functions C. Recent work has shown that, by enriching the class C to incorporate appropriate propensity re-weighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future, target domains (approximately) captured by the re-weightings. Formally, multicalibration with respect to C bounds |𝔼_{(x,y)∼D}[c(f(x),x)⋅(f(x)-y)]| for all c ∈ C. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose s-Happy Multicalibration, a generalization of multi-calibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification, a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches. We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success.
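The displayed condition is directly checkable on a sample. A small numpy sketch (illustrative, not the HappyMap meta-algorithm) that estimates the multicalibration violation max over c ∈ C of |𝔼[c(f(x),x)⋅(f(x)-y)]| for a finite auditor class C; the example auditors are assumptions for the demo:

import numpy as np

def multicalibration_violation(preds, y, X, auditors):
    # Empirical estimate of |E[c(f(x), x) * (f(x) - y)]|, maximized over C.
    residual = preds - y
    return max(abs(np.mean(c(preds, X) * residual)) for c in auditors)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (rng.random(1000) < 0.5).astype(float)
preds = np.full(1000, 0.5)                      # a trivially calibrated predictor

auditors = [
    lambda f, X: np.ones(len(X)),               # overall calibration
    lambda f, X: (X[:, 0] > 0).astype(float),   # a demographic subgroup: x_0 > 0
    lambda f, X: f * (X[:, 1] < 0),             # propensity-style re-weighting
]
print(multicalibration_violation(preds, y, X, auditors))  # small for this f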

Cite as

Zhun Deng, Cynthia Dwork, and Linjun Zhang. HappyMap : A Generalized Multicalibration Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 41:1-41:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{deng_et_al:LIPIcs.ITCS.2023.41,
  author =	{Deng, Zhun and Dwork, Cynthia and Zhang, Linjun},
  title =	{{HappyMap : A Generalized Multicalibration Method}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{41:1--41:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.41},
  URN =		{urn:nbn:de:0030-drops-175449},
  doi =		{10.4230/LIPIcs.ITCS.2023.41},
  annote =	{Keywords: algorithmic fairness, target-independent learning, transfer learning}
}
Document
Pareto-Rational Verification

Authors: Véronique Bruyère, Jean-François Raskin, and Clément Tamines

Published in: LIPIcs, Volume 243, 33rd International Conference on Concurrency Theory (CONCUR 2022)


Abstract
We study the rational verification problem which consists in verifying the correctness of a system executing in an environment that is assumed to behave rationally. We consider the model of rationality in which the environment only executes behaviors that are Pareto-optimal with regard to its set of objectives, given the behavior of the system (which is committed in advance of any interaction). We examine two ways of specifying this behavior, first by means of a deterministic Moore machine, and then by lifting its determinism. In the latter case the machine may embed several different behaviors for the system, and the universal rational verification problem aims at verifying that all of them are correct when the environment is rational. For parity objectives, we prove that the Pareto-rational verification problem is co-NP-complete and that its universal version is in PSPACE and both NP-hard and co-NP-hard. For Boolean Büchi objectives, the former problem is Π₂𝖯-complete and the latter is PSPACE-complete. We also study the case where the objectives are expressed using LTL formulas and show that the first problem is PSPACE-complete, and that the second is 2EXPTIME-complete. Both problems are also shown to be fixed-parameter tractable for parity and Boolean Büchi objectives.
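The rationality notion itself is easy to make concrete. A small Python sketch, with the paper's parity/Büchi/LTL objectives abstracted into numeric payoff vectors (one coordinate per environment objective, higher is better): the rational environment only plays behaviors whose payoff vectors are Pareto-optimal, and verification then quantifies over exactly those.

def pareto_optimal(payoffs):
    # Keep the payoff vectors that no other vector strictly dominates.
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [p for p in payoffs if not any(dominates(q, p) for q in payoffs)]

behaviours = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 0, 0)]
print(pareto_optimal(behaviours))   # (1, 0, 0) is dominated and thus never played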

Cite as

Véronique Bruyère, Jean-François Raskin, and Clément Tamines. Pareto-Rational Verification. In 33rd International Conference on Concurrency Theory (CONCUR 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 243, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bruyere_et_al:LIPIcs.CONCUR.2022.33,
  author =	{Bruy\`{e}re, V\'{e}ronique and Raskin, Jean-Fran\c{c}ois and Tamines, Cl\'{e}ment},
  title =	{{Pareto-Rational Verification}},
  booktitle =	{33rd International Conference on Concurrency Theory (CONCUR 2022)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-246-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{243},
  editor =	{Klin, Bartek and Lasota, S{\l}awomir and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2022.33},
  URN =		{urn:nbn:de:0030-drops-170968},
  doi =		{10.4230/LIPIcs.CONCUR.2022.33},
  annote =	{Keywords: Rational verification, Model-checking, Pareto-optimality, \omega-regular objectives}
}
Document
Sheaf Semantics of Termination-Insensitive Noninterference

Authors: Jonathan Sterling and Robert Harper

Published in: LIPIcs, Volume 228, 7th International Conference on Formal Structures for Computation and Deduction (FSCD 2022)


Abstract
We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state of the art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper’s recent re-interpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between "higher" and "lower" security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.’s dependency core calculus to which we have added a construct for declassifying termination channels.

Cite as

Jonathan Sterling and Robert Harper. Sheaf Semantics of Termination-Insensitive Noninterference. In 7th International Conference on Formal Structures for Computation and Deduction (FSCD 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 228, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{sterling_et_al:LIPIcs.FSCD.2022.5,
  author =	{Sterling, Jonathan and Harper, Robert},
  title =	{{Sheaf Semantics of Termination-Insensitive Noninterference}},
  booktitle =	{7th International Conference on Formal Structures for Computation and Deduction (FSCD 2022)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-233-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{228},
  editor =	{Felty, Amy P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2022.5},
  URN =		{urn:nbn:de:0030-drops-162869},
  doi =		{10.4230/LIPIcs.FSCD.2022.5},
  annote =	{Keywords: information flow, noninterference, denotational semantics, phase distinction, Artin gluing, modal type theory, topos theory, synthetic domain theory, synthetic Tait computability}
}
Document
Monotone Complexity of Spanning Tree Polynomial Re-Visited

Authors: Arkadev Chattopadhyay, Rajit Datta, Utsab Ghosal, and Partha Mukhopadhyay

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We prove two results that shed new light on the monotone complexity of the spanning tree polynomial, a classic polynomial in algebraic complexity and beyond. First, we show that spanning tree polynomials having n variables and defined over constant-degree expander graphs have monotone arithmetic complexity 2^{Ω(n)}. This yields the first strongly exponential lower bound on monotone arithmetic circuit complexity for a polynomial in VP. Before this result, strongly exponential size monotone lower bounds were known only for explicit polynomials in VNP [S. B. Gashkov and I. S. Sergeev, 2012; Ran Raz and Amir Yehudayoff, 2011; Srikanth Srinivasan, 2020; Bruno Pasqualotto Cavalar et al., 2020; Pavel Hrubeš and Amir Yehudayoff, 2021]. Recently, Hrubeš [Pavel Hrubeš, 2020] initiated a program to prove lower bounds against general arithmetic circuits by proving ε-sensitive lower bounds for monotone arithmetic circuits for a specific range of values for ε ∈ (0,1). The first ε-sensitive lower bound was recently proved for a family of polynomials inside VNP by Chattopadhyay, Datta and Mukhopadhyay [Arkadev Chattopadhyay et al., 2021]. We consider the spanning tree polynomial ST_n defined over the complete graph on n vertices and show that the polynomials F_{n-1,n} - ε⋅ST_n and F_{n-1,n} + ε⋅ST_n, defined over (n-1)n variables, have monotone circuit complexity 2^{Ω(n)} if ε ≥ 2^{-Ω(n)}, where F_{n-1,n} := ∏_{i = 2}ⁿ (x_{i,1} + ⋯ + x_{i,n}) is the complete set-multilinear polynomial. This provides the first ε-sensitive exponential lower bound for a family of polynomials inside VP. En route, we consider a problem in 2-party, best-partition communication complexity of deciding whether two sets of oriented edges distributed among Alice and Bob form a spanning tree or not. We prove that there exists a fixed distribution under which the problem has low discrepancy with respect to every nearly-balanced partition. This result could be of interest beyond algebraic complexity. Our two results are thus incomparable generalizations of the well-known result by Jerrum and Snir [Mark Jerrum and Marc Snir, 1982], which showed that the spanning tree polynomial, defined over complete graphs with n vertices (so the number of variables is (n-1)n), has monotone complexity 2^{Ω(n)}. In particular, the first result is an optimal lower bound and the second result can be thought of as a robust version of the earlier monotone lower bound for the spanning tree polynomial.
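For readers less familiar with the object: the spanning tree polynomial sums, over the spanning trees of a graph, the product of the edge variables, so evaluating it at the all-ones point counts spanning trees, and by Kirchhoff's matrix-tree theorem that value is a Laplacian cofactor. A short numpy illustration of the polynomial itself (unrelated to the monotone lower-bound machinery above):

import numpy as np

def count_spanning_trees(adj):
    # Matrix-tree theorem: any cofactor of the Laplacian L = D - A equals
    # the number of spanning trees, i.e. ST_n evaluated at the all-ones point.
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return round(np.linalg.det(L[1:, 1:]))

K4 = 1 - np.eye(4)                       # complete graph on 4 vertices
assert count_spanning_trees(K4) == 16    # Cayley's formula: 4^{4-2} = 16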

Cite as

Arkadev Chattopadhyay, Rajit Datta, Utsab Ghosal, and Partha Mukhopadhyay. Monotone Complexity of Spanning Tree Polynomial Re-Visited. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 39:1-39:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.ITCS.2022.39,
  author =	{Chattopadhyay, Arkadev and Datta, Rajit and Ghosal, Utsab and Mukhopadhyay, Partha},
  title =	{{Monotone Complexity of Spanning Tree Polynomial Re-Visited}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{39:1--39:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.39},
  URN =		{urn:nbn:de:0030-drops-156356},
  doi =		{10.4230/LIPIcs.ITCS.2022.39},
  annote =	{Keywords: Spanning Tree Polynomial, Monotone Computation, Lower Bounds, Communication Complexity}
}
Document
Supporting the Annotation Experience Through CorEx and Word Mover’s Distance

Authors: Stefania Pecòre

Published in: OASIcs, Volume 93, 3rd Conference on Language, Data and Knowledge (LDK 2021)


Abstract
Online communities can be used to promote destructive behaviours, as in pro-Eating Disorder (ED) communities. Research needs annotated data to study these phenomena. Even though many platforms have already moderated this type of content, Twitter has not, and it can still be used for research purposes. In this paper, we unveiled emojis, words, and uncommon linguistic patterns within the ED Twitter community by using the Correlation Explanation (CorEx) algorithm on unstructured, non-annotated data to retrieve the topics. We then annotated the dataset following these topics. Finally, we analysed the use of CorEx and Word Mover’s Distance to automatically retrieve similar new sentences and augment the annotated dataset.
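On the second step: Word Mover's Distance is the cost of optimally transporting one document's word mass to the other's in embedding space, and the cheap lower-bound relaxation of Kusner et al. (each word moves entirely to its nearest counterpart) is easy to sketch in numpy. Tokenisation and embeddings are assumed given; this is an illustrative sketch, not the paper's exact pipeline:

import numpy as np

def relaxed_wmd(doc_a, doc_b, emb):
    # Relaxed WMD: each word's (uniform) mass moves to the closest word of the
    # other document; the max of the two one-sided costs lower-bounds true WMD.
    def one_sided(src, dst):
        return np.mean([min(np.linalg.norm(emb[w] - emb[v]) for v in dst)
                        for w in src])
    return max(one_sided(doc_a, doc_b), one_sided(doc_b, doc_a))

emb = {"thin": np.array([0.0, 1.0]), "skinny": np.array([0.1, 1.0]),
       "goal": np.array([1.0, 0.0]), "target": np.array([1.1, 0.0]),
       "rain": np.array([5.0, 5.0])}
near = relaxed_wmd(["thin", "goal"], ["skinny", "target"], emb)
far = relaxed_wmd(["thin", "goal"], ["rain"], emb)
assert near < far    # similar sentences rank closer, as used for augmentation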

Cite as

Stefania Pecòre. Supporting the Annotation Experience Through CorEx and Word Mover’s Distance. In 3rd Conference on Language, Data and Knowledge (LDK 2021). Open Access Series in Informatics (OASIcs), Volume 93, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{pecore:OASIcs.LDK.2021.12,
  author =	{Pec\`{o}re, Stefania},
  title =	{{Supporting the Annotation Experience Through CorEx and Word Mover’s Distance}},
  booktitle =	{3rd Conference on Language, Data and Knowledge (LDK 2021)},
  pages =	{12:1--12:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-199-3},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{93},
  editor =	{Gromann, Dagmar and S\'{e}rasset, Gilles and Declerck, Thierry and McCrae, John P. and Gracia, Jorge and Bosque-Gil, Julia and Bobillo, Fernando and Heinisch, Barbara},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2021.12},
  URN =		{urn:nbn:de:0030-drops-145481},
  doi =		{10.4230/OASIcs.LDK.2021.12},
  annote =	{Keywords: topic retrieval, annotation, eating disorders, natural language processing}
}
Document
Invited Talk
3SUM and Related Problems in Fine-Grained Complexity (Invited Talk)

Authors: Virginia Vassilevska Williams

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
3SUM is a simple-to-state problem: given a set S of n numbers, determine whether S contains three numbers a, b, c so that a+b+c = 0. The fastest algorithms for the problem run in n² poly(log log n)/(log n)² time both when the input numbers are integers [Ilya Baran et al., 2005] (in the word RAM model with O(log n) bit words) and when they are real numbers [Timothy M. Chan, 2020] (in the real RAM model). A hypothesis that is now central in Fine-Grained Complexity (FGC) states that 3SUM requires n^{2-o(1)} time (on the real RAM for real inputs and on the word RAM with O(log n) bit numbers for integer inputs). This hypothesis was first used in Computational Geometry by Gajentaan and Overmars [A. Gajentaan and M. Overmars, 1995], who built a web of reductions showing that many geometric problems are hard, assuming that 3SUM is hard. The web of reductions within computational geometry has grown considerably since then (see some citations in [V. Vassilevska Williams, 2018]). A seminal paper by Pǎtraşcu [Mihai Pǎtraşcu, 2010] showed that the integer version of the 3SUM hypothesis can be used to prove polynomial conditional lower bounds for several problems in data structures and graph algorithms as well, extending the implications of the hypothesis to outside computational geometry. Pǎtraşcu proved an important tight equivalence between (integer) 3SUM and a problem called 3SUM-Convolution (see also [Timothy M. Chan and Qizheng He, 2020]) that is easier to use in reductions: given an integer array a of length n, do there exist i,j ∈ [n] so that a[i]+a[j] = a[i+j]? From 3SUM-Convolution, many 3SUM-based hardness results have been proven: e.g., for listing triangles in graphs, dynamically maintaining shortest paths or bipartite matching, subset intersection, and many more. It is interesting to consider more runtime-equivalent formulations of 3SUM, with the goal of uncovering more relationships to different problems. The talk will outline some such equivalences. For instance, 3SUM (over the reals or the integers) is equivalent to All-Numbers-3SUM: given a set S of n numbers, determine for every a ∈ S whether there are b,c ∈ S with a+b+c = 0 (e.g. [V. Vassilevska Williams and R. Williams, 2018]). The equivalences between 3SUM, 3SUM-Convolution and All-Numbers 3SUM are (n²,n²)-fine-grained equivalences that imply that if there is an O(n^{2-ε}) time algorithm for one of the problems for ε > 0, then there is also an O(n^{2-ε'}) time algorithm for the other problems for some ε' > 0. More generally, for functions a(n),b(n), there is an (a,b)-fine-grained reduction [V. Vassilevska Williams, 2018; V. Vassilevska Williams and R. Williams, 2010; V. Vassilevska Williams and R. Williams, 2018] from problem A to problem B if for every ε > 0 there is a δ > 0 and an O(a(n)^{1-δ}) time algorithm for A that does oracle calls to instances of B of sizes n₁,…,n_k (for some k) so that ∑_{j = 1}^k b(n_j)^{1-ε} ≤ a(n)^{1-δ}. With such a reduction, an O(b(n)^{1-ε}) time algorithm for B can be converted into an O(a(n)^{1-δ}) time algorithm for A by replacing the oracle calls by calls to the B algorithm. A and B are (a,b)-fine-grained equivalent if A (a,b)-reduces to B and B (b,a)-reduces to A. One of the main open problems in FGC is to determine the relationship between 3SUM and the other central FGC problems, in particular All-Pairs Shortest Paths (APSP). A classical graph problem, APSP in n node graphs has been known to be solvable in O(n³) time since the 1950s.
Its fastest known algorithm runs in n³/exp(√{log n}) time [Ryan Williams, 2014]. The APSP Hypothesis states that n^{3-o(1)} time is needed to solve APSP in graphs with integer edge weights in the word-RAM model with O(log n) bit words. It is unknown whether APSP and 3SUM are fine-grained reducible to each other, in either direction. The two problems are very similar. Problems such as (min,+)-convolution (believed to require n^{2-o(1)} time) have tight fine-grained reductions to both APSP and 3SUM, and both 3SUM and APSP have tight fine-grained reductions to problems such as Exact Triangle [V. Vassilevska Williams and R. Williams, 2018; V. Vassilevska and R. Williams, 2009; V. Vassilevska Williams and Ryan Williams, 2013] and (since very recently) Listing triangles in sparse graphs [Mihai Pǎtraşcu, 2010; Tsvi Kopelowitz et al., 2016; V. Vassilevska Williams and Yinzhan Xu, 2020]. The talk will discuss these relationships and some of their implications, e.g. to dynamic algorithms.
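For concreteness, here is the classic quadratic algorithm that the 3SUM hypothesis says cannot be beaten by a polynomial factor: sort, then sweep two pointers over the remaining suffix (a standard textbook sketch, not material from the talk):

def three_sum(S):
    # O(n^2): for each a[i], scan the sorted suffix with two pointers.
    a = sorted(S)
    for i in range(len(a) - 2):
        lo, hi = i + 1, len(a) - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return a[i], a[lo], a[hi]
            lo, hi = (lo + 1, hi) if s < 0 else (lo, hi - 1)
    return None

assert three_sum([9, -5, 1, 4, -7]) == (-5, 1, 4)
assert three_sum([1, 2, 3]) is None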

Cite as

Virginia Vassilevska Williams. 3SUM and Related Problems in Fine-Grained Complexity (Invited Talk). In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 2:1-2:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{vassilevskawilliams:LIPIcs.SoCG.2021.2,
  author =	{Vassilevska Williams, Virginia},
  title =	{{3SUM and Related Problems in Fine-Grained Complexity}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{2:1--2:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.2},
  URN =		{urn:nbn:de:0030-drops-138014},
  doi =		{10.4230/LIPIcs.SoCG.2021.2},
  annote =	{Keywords: fine-grained complexity}
}
Document
Approximate Nearest-Neighbor Search for Line Segments

Authors: Ahmed Abdelkader and David M. Mount

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
Approximate nearest-neighbor search is a fundamental algorithmic problem that continues to inspire study due to its essential role in numerous contexts. In contrast to most prior work, which has focused on point sets, we consider nearest-neighbor queries against a set of line segments in ℝ^d, for constant dimension d. Given a set S of n disjoint line segments in ℝ^d and an error parameter ε > 0, the objective is to build a data structure such that for any query point q, it is possible to return a line segment whose Euclidean distance from q is at most (1+ε) times the distance from q to its nearest line segment. We present a data structure for this problem with storage O((n²/ε^d) log (Δ/ε)) and query time O(log (max(n,Δ)/ε)), where Δ is the spread of the set of segments S. Our approach is based on a covering of space by anisotropic elements, which align themselves according to the orientations of nearby segments.
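As a baseline that pins down the query semantics, exact nearest-segment search is a linear scan over point-to-segment distances; the data structure above may return any segment within a (1+ε) factor of this minimum. A short numpy sketch (illustrative, not the paper's structure):

import numpy as np

def dist_point_segment(q, a, b):
    # Euclidean distance from point q to segment ab (clamp the projection).
    q, a, b = map(np.asarray, (q, a, b))
    ab = b - a
    denom = ab.dot(ab)
    t = 0.0 if denom == 0 else np.clip((q - a).dot(ab) / denom, 0.0, 1.0)
    return float(np.linalg.norm(q - (a + t * ab)))

def nearest_segment(q, segments):
    # Exact O(n) query; an ANN structure answers within (1 + eps) of this.
    return min(segments, key=lambda s: dist_point_segment(q, s[0], s[1]))

segs = [((0, 0), (1, 0)), ((2, 2), (3, 2)), ((0, 3), (0, 5))]
print(nearest_segment((0.5, 1.0), segs))   # ((0, 0), (1, 0)), at distance 1.0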

Cite as

Ahmed Abdelkader and David M. Mount. Approximate Nearest-Neighbor Search for Line Segments. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{abdelkader_et_al:LIPIcs.SoCG.2021.4,
  author =	{Abdelkader, Ahmed and Mount, David M.},
  title =	{{Approximate Nearest-Neighbor Search for Line Segments}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.4},
  URN =		{urn:nbn:de:0030-drops-138039},
  doi =		{10.4230/LIPIcs.SoCG.2021.4},
  annote =	{Keywords: Approximate nearest-neighbor searching, Approximate Voronoi diagrams, Line segments, Macbeath regions}
}
Document
Lower Bounds for Semialgebraic Range Searching and Stabbing Problems

Authors: Peyman Afshani and Pingan Cheng

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
In the semialgebraic range searching problem, we are given a set of n points in ℝ^d and we want to preprocess the points such that for any query range belonging to a family of constant-complexity semialgebraic sets (Tarski cells), all the points intersecting the range can be reported or counted efficiently. When the ranges are composed of simplices, the problem is well understood: it can be solved using S(n) space and Q(n) query time with S(n)Q^d(n) = Õ(n^d), where the Õ(⋅) notation hides polylogarithmic factors, and this trade-off is tight (up to n^o(1) factors). Consequently, there exist "low space" structures that use O(n) space with O(n^{1-1/d}) query time and "fast query" structures that use O(n^d) space with O(log^{d+1} n) query time. However, for general semialgebraic ranges, only "low space" solutions are known, and the best solutions match the same trade-off curve as the simplex queries, with O(n) space and Õ(n^{1-1/d}) query time. It has been conjectured that the same could be done for the "fast query" case, but this open problem has remained unresolved. Here, we disprove this conjecture. We give the first nontrivial lower bounds for semialgebraic range searching and other related problems. More precisely, we show that any data structure for reporting the points between two concentric circles, a problem that we call the 2D annulus reporting problem, with Q(n) query time must use S(n) = Ω^o(n³/Q(n)⁵) space, where the Ω^o(⋅) notation hides n^o(1) factors; that is, for Q(n) = O(log^{O(1)}n), Ω^o(n³) space must be used. In addition, we study the problem of reporting the subset of input points lying between two polynomials of the form Y = ∑_{i=0}^Δ a_i Xⁱ, where the values a_0,⋯,a_Δ are given at query time, a problem that we call polynomial slab reporting. For this, we show a space lower bound of Ω^o(n^{Δ+1}/Q(n)^{Δ²+Δ}), which shows that for Q(n) = O(log^{O(1)}n), we must use Ω^o(n^{Δ+1}) space. We also consider the dual problems of semialgebraic range searching, semialgebraic stabbing problems, and present lower bounds for them. In particular, we show that in linear space, any data structure that solves the 2D annulus stabbing problem must use Ω(n^{2/3}) query time. Note that this almost matches the upper bound obtained by lifting 2D annuli to 3D. As with semialgebraic range searching, we also present lower bounds for general semialgebraic slab stabbing problems. Again, our lower bounds are almost tight for linear-size data structures in this case.
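The 2D annulus reporting problem itself is simple to state; the lower bound says that any structure answering such queries in polylogarithmic time after preprocessing must pay near-cubic space, in contrast to the following trivial linear scan. A minimal Python statement of the problem:

def annulus_report(points, center, r_inner, r_outer):
    # Report every input point lying between two concentric circles.
    cx, cy = center
    return [(x, y) for (x, y) in points
            if r_inner ** 2 <= (x - cx) ** 2 + (y - cy) ** 2 <= r_outer ** 2]

pts = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(annulus_report(pts, (0, 0), 0.5, 2.5))   # [(1, 0), (2, 0)]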

Cite as

Peyman Afshani and Pingan Cheng. Lower Bounds for Semialgebraic Range Searching and Stabbing Problems. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{afshani_et_al:LIPIcs.SoCG.2021.8,
  author =	{Afshani, Peyman and Cheng, Pingan},
  title =	{{Lower Bounds for Semialgebraic Range Searching and Stabbing Problems}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.8},
  URN =		{urn:nbn:de:0030-drops-138072},
  doi =		{10.4230/LIPIcs.SoCG.2021.8},
  annote =	{Keywords: Computational Geometry, Data Structures and Algorithms}
}
Document
Escaping the Curse of Spatial Partitioning: Matchings with Low Crossing Numbers and Their Applications

Authors: Mónika Csikós and Nabil H. Mustafa

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
Given a set system (X, S), constructing a matching of X with low crossing number is a key tool in combinatorics and algorithms. In this paper we present a new sampling-based algorithm which is applicable to finite set systems. Let n = |X|, m = |S|, and assume that X has a perfect matching M such that any set in S crosses at most κ = Θ(n^γ) edges of M. In the case γ = 1 - 1/d, our algorithm computes a perfect matching of X with expected crossing number at most 10κ, in expected time Õ(n^{2+2/d} + m n^{2/d}). As an immediate consequence, we get improved bounds for constructing low-crossing matchings for a slew of both abstract and geometric problems, including many basic geometric set systems (e.g., balls in ℝ^d). This further implies improved algorithms for many well-studied problems such as the construction of ε-approximations. Our work is related to two earlier themes: the work of Varadarajan (STOC '10) / Chan et al. (SODA '12) that avoids spatial partitionings for constructing ε-nets, and of Chan (DCG '12) that gives an optimal algorithm for matchings with respect to hyperplanes in ℝ^d. Another major advantage of our method is its simplicity. An implementation of a variant of our algorithm in C++ is available on GitHub; it is approximately 200 lines of basic code without any non-trivial data structure. Since the start of the study of matchings with low crossing numbers with respect to half-spaces in the 1980s, this is the first implementation made possible for dimensions larger than 2.
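The quantity being optimized is easy to state: a set crosses a matching edge when it contains exactly one of its endpoints, and the crossing number of a matching is the worst case over the set system. A short Python sketch of this definition (the algorithm itself is the sampling procedure of the paper, not shown here):

def crossing_number(matching, sets):
    # A set crosses edge (u, v) iff it contains exactly one endpoint;
    # the matching's crossing number is the maximum over all sets.
    def crossings(S):
        return sum((u in S) != (v in S) for (u, v) in matching)
    return max(crossings(S) for S in sets)

matching = [(0, 1), (2, 3), (4, 5)]
sets = [{0, 2, 4}, {0, 1}, {1, 3, 5}]
print(crossing_number(matching, sets))   # {0, 2, 4} crosses all three edges: 3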

Cite as

Mónika Csikós and Nabil H. Mustafa. Escaping the Curse of Spatial Partitioning: Matchings with Low Crossing Numbers and Their Applications. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{csikos_et_al:LIPIcs.SoCG.2021.28,
  author =	{Csik\'{o}s, M\'{o}nika and Mustafa, Nabil H.},
  title =	{{Escaping the Curse of Spatial Partitioning: Matchings with Low Crossing Numbers and Their Applications}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.28},
  URN =		{urn:nbn:de:0030-drops-138273},
  doi =		{10.4230/LIPIcs.SoCG.2021.28},
  annote =	{Keywords: Matchings, crossing numbers, approximations}
}
Document
Packing Squares into a Disk with Optimal Worst-Case Density

Authors: Sándor P. Fekete, Vijaykrishna Gurunathan, Kushagra Juneja, Phillip Keldenich, Linda Kleist, and Christian Scheffer

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
We provide a tight result for a fundamental problem arising from packing squares into a circular container: The critical density of packing squares into a disk is δ = 8/(5π) ≈ 0.509. This implies that any set of (not necessarily equal) squares of total area A ≤ 8/5 can always be packed into a disk with radius 1; in contrast, for any ε > 0 there are sets of squares of total area 8/5+ε that cannot be packed, even if squares may be rotated. This settles the last (and arguably most elusive) case of packing circular or square objects into a circular or square container: The critical densities for squares in a square (1/2), circles in a square (π/(3+2√2) ≈ 0.539) and circles in a circle (1/2) have already been established, making use of recursive subdivisions of a square container into pieces bounded by straight lines, or the ability to use recursive arguments based on similarity of objects and container; neither of these approaches can be applied when packing squares into a circular container. Our proof uses a careful manual analysis, complemented by a computer-assisted part that is based on interval arithmetic. Beyond the basic mathematical importance, our result is also useful as a blackbox lemma for the analysis of recursive packing algorithms. At the same time, our approach showcases the power of a general framework for computer-assisted proofs, based on interval arithmetic.
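Used as a blackbox lemma, the result gives a one-line sufficient test. A tiny Python sketch (the function name is illustrative): squares of total area at most (8/5)·r² always fit in a disk of radius r, and the constant is worst-case tight.

def squares_fit_in_disk(side_lengths, radius=1.0):
    # Sufficient (and worst-case tight) condition from the critical density
    # delta = 8/(5*pi): total area <= delta * (pi * r^2) = (8/5) * r^2.
    total_area = sum(s * s for s in side_lengths)
    return total_area <= 1.6 * radius ** 2

print(squares_fit_in_disk([1.0, 0.7, 0.3]))   # True: 1.0 + 0.49 + 0.09 <= 1.6
print(squares_fit_in_disk([1.3]))             # False: not guaranteed by the lemma,
                                              # though one square of side < sqrt(2) fits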

Cite as

Sándor P. Fekete, Vijaykrishna Gurunathan, Kushagra Juneja, Phillip Keldenich, Linda Kleist, and Christian Scheffer. Packing Squares into a Disk with Optimal Worst-Case Density. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 36:1-36:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{fekete_et_al:LIPIcs.SoCG.2021.36,
  author =	{Fekete, S\'{a}ndor P. and Gurunathan, Vijaykrishna and Juneja, Kushagra and Keldenich, Phillip and Kleist, Linda and Scheffer, Christian},
  title =	{{Packing Squares into a Disk with Optimal Worst-Case Density}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{36:1--36:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.36},
  URN =		{urn:nbn:de:0030-drops-138356},
  doi =		{10.4230/LIPIcs.SoCG.2021.36},
  annote =	{Keywords: Square packing, packing density, tight worst-case bound, interval arithmetic, approximation}
}
  • Refine by Author
  • 3 Mustière, Sébastien
  • 3 van Oosterom, Peter
  • 2 Bruyère, Véronique
  • 2 Fekete, Sándor P.
  • 2 Harrie, Lars

  • Refine by Classification
  • 7 Theory of computation → Computational geometry
  • 2 Mathematics of computing → Discrete mathematics
  • 2 Theory of computation → Design and analysis of algorithms
  • 2 Theory of computation → Distributed algorithms
  • 1 Applied computing → Annotation

  • Refine by Keyword
  • 3 generalization
  • 2 Distributed systems
  • 2 Spatial information
  • 2 aggregation
  • 2 cartography

  • Refine by Type
  • 44 document

  • Refine by Publication Year
  • 14 2021
  • 8 2009
  • 6 2023
  • 3 2020
  • 3 2022
