120 Search Results for "Smith, Adam"


Volume

LIPIcs, Volume 275

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)

APPROX/RANDOM 2023, September 11-13, 2023, Atlanta, Georgia, USA

Editors: Nicole Megow and Adam Smith

Volume

LIPIcs, Volume 163

1st Conference on Information-Theoretic Cryptography (ITC 2020)

ITC 2020, June 17-19, 2020, Boston, MA, USA

Editors: Yael Tauman Kalai, Adam D. Smith, and Daniel Wichs

Document
APPROX
Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment

Authors: Tomer Ezra, Stefano Leonardi, Michał Pawłowski, Matteo Russo, and Seeun William Umboh

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The online joint replenishment problem (JRP) is a fundamental problem in the area of online problems with delay. Over the last decade, several works have studied generalizations of JRP with different cost functions for servicing requests. Most prior works on JRP and its generalizations have focused on the clairvoyant setting. Recently, Touitou [Noam Touitou, 2023] developed a non-clairvoyant framework that provided an O(√{n log n}) upper bound for a wide class of generalized JRP, where n is the number of request types. We advance the study of non-clairvoyant algorithms by providing a simpler, modular framework that matches the competitive ratio established by Touitou for the same class of generalized JRP. Our key insight is to leverage universal algorithms for Set Cover to approximate arbitrary monotone subadditive functions using a simple class of functions termed disjoint. This allows us to reduce the problem to several independent instances of the TCP Acknowledgement problem, for which a simple 2-competitive non-clairvoyant algorithm is known. The modularity of our framework is a major advantage as it allows us to tailor the reduction to specific problems and obtain better competitive ratios. In particular, we obtain tight O(√n)-competitive algorithms for two significant problems: Multi-Level Aggregation and Weighted Symmetric Subadditive Joint Replenishment. We also show that, in contrast, Touitou’s algorithm is Ω(√{n log n})-competitive for both of these problems.
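
To make the reduction concrete, here is a minimal sketch of the classical 2-competitive non-clairvoyant rule for TCP Acknowledgement that the framework reduces to: accumulate the delay cost of pending requests and trigger a service once it matches the service cost. The unit time steps, linear delay model, and function names are illustrative assumptions, not the paper's implementation.

# Sketch: balance accumulated delay cost against the service cost.
def tcp_ack_online(arrival_times, service_cost=1.0, horizon=100):
    pending = []          # arrival times of unacknowledged requests
    accumulated = 0.0     # delay cost accrued by pending requests
    acks = []             # times at which everything pending is served
    for t in range(horizon):
        pending.extend(a for a in arrival_times if a == t)
        accumulated += len(pending)          # each pending request pays 1 per step
        if pending and accumulated >= service_cost:
            acks.append(t)                   # serve: clear pending, reset the balance
            pending, accumulated = [], 0.0
    return acks

print(tcp_ack_online([0, 1, 7], service_cost=3.0, horizon=12))   # [1, 9]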

Cite as

Tomer Ezra, Stefano Leonardi, Michał Pawłowski, Matteo Russo, and Seeun William Umboh. Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX
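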

@InProceedings{ezra_et_al:LIPIcs.APPROX/RANDOM.2024.12,
  author =	{Ezra, Tomer and Leonardi, Stefano and Paw{\l}owski, Micha{\l} and Russo, Matteo and Umboh, Seeun William},
  title =	{{Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.12},
  URN =		{urn:nbn:de:0030-drops-210050},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.12},
  annote =	{Keywords: Set Cover, Joint Replenishment, TCP-Acknowledgment, Subadditive Function Approximation, Multi-Level Aggregation}
}
Document
APPROX
Rectangle Tiling Binary Arrays

Authors: Pratik Ghosal, Syed Mohammad Meesum, and Katarzyna Paluch

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The problem of rectangle tiling binary arrays is defined as follows. Given an n × n array A of zeros and ones and a natural number p, our task is to partition A into at most p rectangular tiles, so that the maximal weight of a tile is minimized. A tile is any rectangular subarray of A. The weight of a tile is the sum of elements that fall within it. We present a linear (O(n²)) time (3/2 + p²/w(A))-approximation algorithm for this problem, where w(A) denotes the weight of the whole array A. This improves on the previously known approximation ratio of 2 when p²/w(A) < 1/2. The result is best possible in the following sense. The algorithm employs the lower bound of L = ⌈w(A)/p⌉, which is the only known and used bound on the optimum in all algorithms for rectangle tiling. We prove that a better approximation factor for the binary RTile cannot be achieved using L, because there exist arrays whose every partition contains a tile with weight at least (3/2 + p/w(A))L. We also consider the dual problem of rectangle tiling for binary arrays, where we are given an upper bound on the weight of the tiles, and we have to cover the array A with the minimum number of non-overlapping tiles. Both problems have natural extensions to d-dimensional versions, for which we provide analogous results.
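
As a quick illustration of the bound used throughout, the snippet below computes w(A), the lower bound L = ⌈w(A)/p⌉, and the guarantee (3/2 + p²/w(A))·L stated in the abstract. The array and function name are illustrative; the snippet does not implement the tiling algorithm itself.

import math

def tiling_bounds(A, p):
    # A: binary 2D array, p: maximum number of rectangular tiles
    w = sum(sum(row) for row in A)          # total weight w(A)
    L = math.ceil(w / p)                    # lower bound on the heaviest tile
    guarantee = (1.5 + p * p / w) * L       # approximation guarantee from the abstract
    return w, L, guarantee

A = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 0, 1]]
print(tiling_bounds(A, p=3))                # (8, 3, (1.5 + 9/8) * 3)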

Cite as

Pratik Ghosal, Syed Mohammad Meesum, and Katarzyna Paluch. Rectangle Tiling Binary Arrays. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ghosal_et_al:LIPIcs.APPROX/RANDOM.2024.28,
  author =	{Ghosal, Pratik and Meesum, Syed Mohammad and Paluch, Katarzyna},
  title =	{{Rectangle Tiling Binary Arrays}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.28},
  URN =		{urn:nbn:de:0030-drops-210214},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.28},
  annote =	{Keywords: Rectangle Tiling, RTILE, DRTILE}
}
Document
RANDOM
Private Counting of Distinct Elements in the Turnstile Model and Extensions

Authors: Monika Henzinger, A. R. Sricharan, and Teresa Anna Steiner

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Privately counting distinct elements in a stream is a fundamental data analysis problem with many applications in machine learning. In the turnstile model, Jain et al. [NeurIPS2023] initiated the study of this problem parameterized by the maximum flippancy of any element, i.e., the number of times that the count of an element changes from 0 to above 0 or vice versa. They give an item-level (ε,δ)-differentially private algorithm whose additive error is tight with respect to that parameterization. In this work, we show that a very simple algorithm based on the sparse vector technique achieves a tight additive error for item-level (ε,δ)-differential privacy and item-level ε-differential privacy with regard to a different parameterization, namely the sum of all flippancies. Our second result is a bound which shows that for a large class of algorithms, including all existing differentially private algorithms for this problem, the lower bound from item-level differential privacy extends to event-level differential privacy. This partially answers an open question by Jain et al. [NeurIPS2023].
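
For context, the sparse vector technique referenced above can be sketched as the standard AboveThreshold routine: add Laplace noise to the threshold once, then compare each noisily perturbed query against it and halt at the first "above" answer. The noise scales below follow the textbook presentation for sensitivity-1 queries and are not the specific calibration used in the paper.

import numpy as np

def above_threshold(queries, data, threshold, eps, rng=None):
    # Textbook AboveThreshold: each query q has sensitivity 1 in `data`.
    rng = rng or np.random.default_rng(0)
    noisy_threshold = threshold + rng.laplace(scale=2.0 / eps)
    for i, q in enumerate(queries):
        if q(data) + rng.laplace(scale=4.0 / eps) >= noisy_threshold:
            return i          # index of the first query reported as above threshold
    return None               # no query exceeded the (noisy) threshold

data = [1, 0, 1, 1, 0, 1]
queries = [lambda d, k=k: sum(d[:k]) for k in range(1, len(data) + 1)]
print(above_threshold(queries, data, threshold=3, eps=1.0))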

Cite as

Monika Henzinger, A. R. Sricharan, and Teresa Anna Steiner. Private Counting of Distinct Elements in the Turnstile Model and Extensions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 40:1-40:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{henzinger_et_al:LIPIcs.APPROX/RANDOM.2024.40,
  author =	{Henzinger, Monika and Sricharan, A. R. and Steiner, Teresa Anna},
  title =	{{Private Counting of Distinct Elements in the Turnstile Model and Extensions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{40:1--40:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.40},
  URN =		{urn:nbn:de:0030-drops-210335},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.40},
  annote =	{Keywords: differential privacy, turnstile model, counting distinct elements}
}
Document
RANDOM
Capacity-Achieving Gray Codes

Authors: Venkatesan Guruswami and Hsin-Po Wang

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
To ensure differential privacy, one can reveal an integer fuzzily in two ways: (a) add some Laplace noise to the integer, or (b) encode the integer as a binary string and add iid BSC noise. The former is simple and natural while the latter is flexible and affordable, especially when one wants to reveal a sparse vector of integers. In this paper, we propose an implementation of (b) that achieves the capacity of the BSC with positive error exponents. Our implementation adds error-correcting functionality to Gray codes by mimicking how software updates back up the files that are getting updated ("coded Gray code"). In contrast, the old implementation of (b) interpolates between codewords of a black-box error-correcting code ("Grayed code").
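
For readers unfamiliar with Gray codes, the snippet below shows the standard reflected binary Gray code (adjacent integers differ in exactly one bit), which is the combinatorial object the paper equips with error correction; it does not implement the coded construction itself.

def to_gray(n: int) -> int:
    # Reflected binary Gray code: adjacent integers differ in exactly one bit.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(6):
    print(i, format(to_gray(i), "03b"), from_gray(to_gray(i)))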

Cite as

Venkatesan Guruswami and Hsin-Po Wang. Capacity-Achieving Gray Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 65:1-65:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2024.65,
  author =	{Guruswami, Venkatesan and Wang, Hsin-Po},
  title =	{{Capacity-Achieving Gray Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{65:1--65:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.65},
  URN =		{urn:nbn:de:0030-drops-210582},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.65},
  annote =	{Keywords: Gray codes, capacity-achieving codes, differential privacy}
}
Document
RANDOM
Public Coin Interactive Proofs for Label-Invariant Distribution Properties

Authors: Tal Herman

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Assume we are given sample access to an unknown distribution D over a large domain [N]. An emerging line of work has demonstrated that many basic quantities relating to the distribution, such as its distance from uniform and its Shannon entropy, despite being hard to approximate through the samples only, can be efficiently and verifiably approximated through interaction with an untrusted powerful prover that knows the entire distribution [Herman and Rothblum, STOC 2022, FOCS 2023]. Concretely, these works provide an efficient proof system for approximation of any label-invariant distribution quantity (i.e. any function over the distribution that is invariant to a re-labeling of the domain [N]). In our main result, we present the first efficient public-coin AM protocol for any label-invariant property. Our protocol achieves sample complexity and communication complexity of magnitude Õ(N^{2/3}), while the proof can be generated in quasi-linear Õ(N) time. On top of that, we also give a public-coin protocol for efficiently verifying the distance between a samplable distribution D and some explicitly given distribution Q.

Cite as

Tal Herman. Public Coin Interactive Proofs for Label-Invariant Distribution Properties. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 72:1-72:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{herman:LIPIcs.APPROX/RANDOM.2024.72,
  author =	{Herman, Tal},
  title =	{{Public Coin Interactive Proofs for Label-Invariant Distribution Properties}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{72:1--72:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.72},
  URN =		{urn:nbn:de:0030-drops-210654},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.72},
  annote =	{Keywords: Interactive Proof Systems, Distribution Testing, Public-Coin Protocols}
}
Document
RANDOM
Additive Noise Mechanisms for Making Randomized Approximation Algorithms Differentially Private

Authors: Jakub Tětek

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The exponential increase in the amount of available data makes taking advantage of them without violating users' privacy one of the fundamental problems of computer science. This question has been investigated thoroughly under the framework of differential privacy. However, most of the literature has not focused on settings where the amount of data is so large that we are not even able to compute the exact answer in the non-private setting (such as in the streaming setting, sublinear-time setting, etc.). This can often make the use of differential privacy infeasible in practice. In this paper, we show a general approach for making Monte-Carlo randomized approximation algorithms differentially private. We only need to assume the error R of the approximation algorithm is sufficiently concentrated around 0 (e.g. 𝔼[|R|] is bounded) and that the function being approximated has a small global sensitivity Δ. Specifically, if we have a randomized approximation algorithm with sufficiently concentrated error which has time/space/query complexity T(n,ρ) with ρ being an accuracy parameter, we can, generally speaking, get an algorithm with the same accuracy and complexity T(n,Θ(ε ρ)) that is ε-differentially private. Our technical results are as follows. First, we show that if the error is subexponential, then the Laplace mechanism with error magnitude proportional to the sum of the global sensitivity Δ and the subexponential diameter of the error of the algorithm makes the algorithm differentially private. This is true even if the worst-case global sensitivity of the algorithm is large or infinite. We then introduce a new additive noise mechanism, which we call the zero-symmetric Pareto mechanism. We show that using this mechanism, we can make an algorithm differentially private even if we only assume a bound on the first absolute moment of the error 𝔼[|R|]. Finally, we use our results to give either the first known or improved sublinear-complexity differentially private algorithms for various problems. This includes results for frequency moments, estimating the average degree of a graph in sublinear time, rank queries, and estimating the size of the maximum matching. Our results raise many new questions and we state multiple open problems.
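
The first technical result can be pictured with the usual Laplace mechanism, with the noise scale grown from the global sensitivity Δ alone to Δ plus a diameter-like bound on the approximation error. The exact calibration and constants are in the paper, so the scale, names, and toy subsampling estimator below are illustrative assumptions.

import numpy as np

def private_approximation(approx_alg, data, delta_f, error_diameter, eps, rng=None):
    # approx_alg(data) returns a randomized approximation of f(data);
    # delta_f is the global sensitivity of f, error_diameter a bound on the
    # concentration of the approximation error (stand-in for the paper's
    # subexponential diameter). Scale chosen for illustration only.
    rng = rng or np.random.default_rng(0)
    scale = (delta_f + error_diameter) / eps
    return approx_alg(data) + rng.laplace(scale=scale)

# Toy use: approximate the mean of a bounded dataset by subsampling.
data = np.clip(np.random.default_rng(1).normal(5, 2, size=10_000), 0, 10)
subsample_mean = lambda d: np.random.default_rng(2).choice(d, 500).mean()
print(private_approximation(subsample_mean, data, delta_f=10 / len(data),
                            error_diameter=0.2, eps=1.0))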

Cite as

Jakub Tětek. Additive Noise Mechanisms for Making Randomized Approximation Algorithms Differentially Private. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 73:1-73:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{tetek:LIPIcs.APPROX/RANDOM.2024.73,
  author =	{T\v{e}tek, Jakub},
  title =	{{Additive Noise Mechanisms for Making Randomized Approximation Algorithms Differentially Private}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{73:1--73:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.73},
  URN =		{urn:nbn:de:0030-drops-210660},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.73},
  annote =	{Keywords: Differential privacy, Randomized approximation algorithms}
}
Document
Rose: Composable Autodiff for the Interactive Web

Authors: Sam Estep, Wode Ni, Raven Rothkopf, and Joshua Sunshine

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Reverse-mode automatic differentiation (autodiff) has been popularized by deep learning, but its ability to compute gradients is also valuable for interactive use cases such as bidirectional computer-aided design, embedded physics simulations, visualizing causal inference, and more. Unfortunately, the web is ill-served by existing autodiff frameworks, which use autodiff strategies that perform poorly on dynamic scalar programs, and pull in heavy dependencies that would result in unacceptable webpage sizes. This work introduces Rose, a lightweight autodiff framework for the web using a new hybrid approach to reverse-mode autodiff, blending conventional tracing and transformation techniques in a way that uses the host language for metaprogramming while also allowing the programmer to explicitly define reusable functions that comprise a larger differentiable computation. We demonstrate the value of the Rose design by porting two differentiable physics simulations, and evaluate its performance on an optimization-based diagramming application, showing Rose outperforming the state-of-the-art in web-based autodiff by multiple orders of magnitude.
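
To illustrate what reverse-mode autodiff computes (not Rose's hybrid tracing/transformation design, and in Python rather than the library's web setting), here is a minimal tape-based sketch: operations record their parents and local derivatives, and a backward pass accumulates adjoints in reverse topological order.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # (parent Var, local derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])

def backward(out):
    # Topologically order the tape, then push adjoints from the output back.
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):
        for parent, local in v.parents:
            parent.grad += local * v.grad

x, y = Var(2.0), Var(3.0)
z = x * y + x                    # z = x*y + x
backward(z)
print(z.value, x.grad, y.grad)   # 8.0 4.0 2.0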

Cite as

Sam Estep, Wode Ni, Raven Rothkopf, and Joshua Sunshine. Rose: Composable Autodiff for the Interactive Web. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 15:1-15:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{estep_et_al:LIPIcs.ECOOP.2024.15,
  author =	{Estep, Sam and Ni, Wode and Rothkopf, Raven and Sunshine, Joshua},
  title =	{{Rose: Composable Autodiff for the Interactive Web}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{15:1--15:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.15},
  URN =		{urn:nbn:de:0030-drops-208642},
  doi =		{10.4230/LIPIcs.ECOOP.2024.15},
  annote =	{Keywords: Automatic differentiation, differentiable programming, compilers, web}
}
Document
Constrictor: Immutability as a Design Concept

Authors: Elad Kinsbruner, Shachar Itzhaky, and Hila Peleg

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Many object-oriented applications in algorithm design rely on objects never changing during their lifetime. This is often tackled by marking object references as read-only, e.g., using the const keyword in C++. In other languages like Python or Java where such a concept does not exist, programmers rely on best practices that are entirely unenforced. While reliance on best practices is obviously too permissive, const-checking is too restrictive: it is possible for a method to mutate the internal state while still satisfying the property we expect from an "immutable" object in this setting. We would therefore like to enforce the immutability of an object’s abstract state. We check an object’s immutability through a view of its abstract state: for instances of an immutable class, the view does not change when running any of the class’s methods, even if some of the internal state does change. If all methods of a class are verified as non-mutating, we can deem the entire class view-immutable. We present an SMT-based algorithm to check view-immutability, and implement it in our linter/verifier, Constrictor. We evaluate Constrictor on 51 examples of immutability-related design violations. Our evaluation shows that Constrictor is effective at catching a variety of prototypical design violations, and does so in seconds. We also explore Constrictor with two real-world case studies.
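
A small Python example of the distinction the paper draws: the class below mutates its internal state (a cache), yet its abstract view, the value returned by value(), never changes, so it is view-immutable even though const-style checking would flag the mutation. The class and method names are illustrative, not Constrictor's input format.

class CachedNorm:
    """Immutable in its abstract view, mutable in its internal state."""
    def __init__(self, xs):
        self._xs = tuple(xs)
        self._cached = None            # internal state that will change

    def value(self):                   # the view: sum of squares of the elements
        if self._cached is None:
            self._cached = sum(x * x for x in self._xs)   # internal mutation
        return self._cached            # ... but the returned view never changes

v = CachedNorm([3, 4])
assert v.value() == v.value() == 25    # the view is stable across calls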

Cite as

Elad Kinsbruner, Shachar Itzhaky, and Hila Peleg. Constrictor: Immutability as a Design Concept. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 22:1-22:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kinsbruner_et_al:LIPIcs.ECOOP.2024.22,
  author =	{Kinsbruner, Elad and Itzhaky, Shachar and Peleg, Hila},
  title =	{{Constrictor: Immutability as a Design Concept}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{22:1--22:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.22},
  URN =		{urn:nbn:de:0030-drops-208715},
  doi =		{10.4230/LIPIcs.ECOOP.2024.22},
  annote =	{Keywords: Immutability, Design Enforcement, SMT, Liskov Substitution Principle, Object-oriented Programming}
}
Document
Compiling with Arrays

Authors: David Richter, Timon Böhler, Pascal Weisenburger, and Mira Mezini

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Linear algebra computations are foundational for neural networks and machine learning, often handled through arrays. While many functional programming languages feature lists and recursion, arrays in linear algebra demand constant-time access and bulk operations. To bridge this gap, some languages represent arrays as (eager) functions instead of lists. In this paper, we connect this idea to a formal logical foundation by interpreting functions as the usual negative types from polarized type theory, and arrays as the corresponding dual positive version of the function type. Positive types are defined to have a single elimination form whose computational interpretation is pattern matching. Just like (positive) product types bind two variables during pattern matching, (positive) array types bind variables with multiplicity during pattern matching. We follow a similar approach for Booleans by introducing conditionally-defined variables. The positive formulation for the array type enables us to combine typed partial evaluation and common subexpression elimination into an elegant algorithm whose result enjoys a property we call maximal fission, which we argue can be beneficial for further optimizations. For this purpose, we present the novel intermediate representation indexed administrative normal form (A_{i}NF), which relies on the formal logical foundation of the positive formulation for the array type to facilitate maximal loop fission and subsequent optimizations. A_{i}NF is normal with regard to commuting conversion for both let-bindings and for-loops, leading to flat and maximally fissioned terms. We mechanize the translation and normalization from a simple surface language to A_{i}NF, establishing that the process terminates, preserves types, and produces maximally fissioned terms.
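
The "arrays as eager functions" idea mentioned at the start can be pictured in Python as representing an array by an index function, so bulk operations compose index functions and are materialized in a single loop at the end. This is a conceptual sketch only, not the paper's typed treatment or its A_{i}NF representation.

# An "array" is a pair (length, index -> value); bulk ops compose index functions.
def tabulate(n, f):
    return (n, f)

def amap(g, arr):
    n, f = arr
    return (n, lambda i: g(f(i)))        # no intermediate array is built

def materialize(arr):
    n, f = arr
    return [f(i) for i in range(n)]      # single fused loop at the end

xs = tabulate(5, lambda i: i * i)        # [0, 1, 4, 9, 16]
ys = amap(lambda v: v + 1, amap(lambda v: 2 * v, xs))
print(materialize(ys))                   # [1, 3, 9, 19, 33]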

Cite as

David Richter, Timon Böhler, Pascal Weisenburger, and Mira Mezini. Compiling with Arrays. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 33:1-33:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{richter_et_al:LIPIcs.ECOOP.2024.33,
  author =	{Richter, David and B\"{o}hler, Timon and Weisenburger, Pascal and Mezini, Mira},
  title =	{{Compiling with Arrays}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{33:1--33:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.33},
  URN =		{urn:nbn:de:0030-drops-208823},
  doi =		{10.4230/LIPIcs.ECOOP.2024.33},
  annote =	{Keywords: array languages, functional programming, domain-specific languages, normalization by evaluation, common subexpression elimination, polarity, positive function type, intrinsic types}
}
Document
Using Constraint Programming for Disjunctive Scheduling in Temporal AI Planning

Authors: Adam Francis Green, J. Christopher Beck, and Amanda Coles

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
We present a novel scheduling model that leverages Constraint Programming (CP) to enhance problem-solving performance in Temporal Planning. Building on the established strategy of decomposing causal and temporal reasoning, our approach abstracts two common fact structures present in many Temporal Planning problems - Semaphores and Envelopes - and performs temporal reasoning in a CP-based scheduler. At each search node in a heuristic search for a temporal plan, we construct and solve a Constraint Satisfaction Problem (CSP) and integrate feedback from the CP-based scheduler to guide the causal planning search towards a solution. Through experimental analysis, we validate the impact of these advances, demonstrating a significant reduction in both the number of states searched and in search time alongside an increase in problem-solving coverage.
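
As a generic illustration of CP-based disjunctive scheduling (using Google OR-Tools' CP-SAT as an assumed off-the-shelf solver, not the paper's scheduler or its planner integration), the sketch below schedules three tasks on one unary resource with a precedence constraint and minimizes the makespan.

from ortools.sat.python import cp_model

model = cp_model.CpModel()
horizon, durations = 30, [4, 3, 5]
starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start{i}")
    e = model.NewIntVar(0, horizon, f"end{i}")
    intervals.append(model.NewIntervalVar(s, d, e, f"task{i}"))
    starts.append(s)
    ends.append(e)

model.AddNoOverlap(intervals)        # disjunctive (unary) resource
model.Add(ends[0] <= starts[2])      # precedence: task 0 before task 2
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(makespan), [solver.Value(s) for s in starts])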

Cite as

Adam Francis Green, J. Christopher Beck, and Amanda Coles. Using Constraint Programming for Disjunctive Scheduling in Temporal AI Planning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{francisgreen_et_al:LIPIcs.CP.2024.12,
  author =	{Francis Green, Adam and Beck, J. Christopher and Coles, Amanda},
  title =	{{Using Constraint Programming for Disjunctive Scheduling in Temporal AI Planning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.12},
  URN =		{urn:nbn:de:0030-drops-206974},
  doi =		{10.4230/LIPIcs.CP.2024.12},
  annote =	{Keywords: AI Planning, Temporal-Numeric Planning, Constraint Programming, Scheduling}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on 2 datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
Learning Lagrangian Multipliers for the Travelling Salesman Problem

Authors: Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Lagrangian relaxation is a versatile mathematical technique employed to relax constraints in an optimization problem, enabling the generation of dual bounds to prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure, aiming to generate accurate Lagrangian multipliers efficiently. We apply this technique to the well-known Held-Karp Lagrangian relaxation for the traveling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to employ them as a warm start for generating Held-Karp relaxation bounds. These bounds are subsequently utilized to enhance the filtering process carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality. We conduct experiments across various distributions of the metric traveling salesman problem, considering instances with up to 200 cities. The results illustrate that our approach can improve the filtering level of the weighted circuit global constraint, reduce the optimality gap by a factor of two for instances unsolved within the timeout, and reduce the execution time for solved instances by 10%.
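
For reference, the Held-Karp 1-tree bound that the learned multipliers warm-start can be sketched as the classic subgradient loop below (using networkx for the spanning tree). The step-size schedule, iteration count, and toy instance are illustrative assumptions, and the GNN prediction itself is not shown.

import itertools
import networkx as nx

def one_tree(w, pi):
    # 1-tree under modified weights w[i][j] + pi[i] + pi[j], node 0 is the special node.
    n = len(pi)
    G = nx.Graph()
    for i, j in itertools.combinations(range(1, n), 2):
        G.add_edge(i, j, weight=w[i][j] + pi[i] + pi[j])
    tree = nx.minimum_spanning_tree(G)
    deg, cost = [0] * n, 0.0
    for i, j, d in tree.edges(data=True):
        cost += d["weight"]; deg[i] += 1; deg[j] += 1
    for j in sorted(range(1, n), key=lambda j: w[0][j] + pi[0] + pi[j])[:2]:
        cost += w[0][j] + pi[0] + pi[j]; deg[0] += 1; deg[j] += 1
    return cost - 2 * sum(pi), deg       # lower bound on the optimal tour length

def held_karp_bound(w, iters=50, step=2.0):
    pi, best = [0.0] * len(w), float("-inf")
    for t in range(1, iters + 1):
        lb, deg = one_tree(w, pi)
        best = max(best, lb)
        pi = [p + (step / t) * (d - 2) for p, d in zip(pi, deg)]   # subgradient step
    return best

w = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(held_karp_bound(w))   # 18.0 on this toy instance, matching the optimal tour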

Cite as

Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau. Learning Lagrangian Multipliers for the Travelling Salesman Problem. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{parjadis_et_al:LIPIcs.CP.2024.22,
  author =	{Parjadis, Augustin and Cappart, Quentin and Dilkina, Bistra and Ferber, Aaron and Rousseau, Louis-Martin},
  title =	{{Learning Lagrangian Multipliers for the Travelling Salesman Problem}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.22},
  URN =		{urn:nbn:de:0030-drops-207076},
  doi =		{10.4230/LIPIcs.CP.2024.22},
  annote =	{Keywords: Lagrangian relaxation, unsupervised learning, graph neural network}
}
Document
Learning Precedences for Scheduling Problems with Graph Neural Networks

Authors: Hélène Verhaeghe, Quentin Cappart, Gilles Pesant, and Claude-Guy Quimper

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
The resource constrained project scheduling problem (RCPSP) consists of scheduling a finite set of resource-consuming tasks within a temporal horizon subject to resource capacities and precedence relations between pairs of tasks. It is NP-hard and many techniques have been introduced to improve the efficiency of CP solvers to solve it. The problem is naturally represented as a directed graph, commonly referred to as the precedence graph, by linking pairs of tasks subject to a precedence. In this paper, we propose to leverage the ability of graph neural networks to extract knowledge from precedence graphs. This is carried out by learning new precedences that can be used either to add new constraints or to design a dedicated variable-selection heuristic. Experiments carried out on RCPSP instances from PSPLIB show the potential of learning to predict precedences and how they can help speed up the search for solutions by a CP solver.

Cite as

Hélène Verhaeghe, Quentin Cappart, Gilles Pesant, and Claude-Guy Quimper. Learning Precedences for Scheduling Problems with Graph Neural Networks. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 30:1-30:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{verhaeghe_et_al:LIPIcs.CP.2024.30,
  author =	{Verhaeghe, H\'{e}l\`{e}ne and Cappart, Quentin and Pesant, Gilles and Quimper, Claude-Guy},
  title =	{{Learning Precedences for Scheduling Problems with Graph Neural Networks}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{30:1--30:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.30},
  URN =		{urn:nbn:de:0030-drops-207150},
  doi =		{10.4230/LIPIcs.CP.2024.30},
  annote =	{Keywords: Scheduling, Precedence graph, Graph neural network}
}
  • Refine by Author
  • 3 Guruswami, Venkatesan
  • 3 Smith, Adam
  • 3 Zhou, Samson
  • 2 Assadi, Sepehr
  • 2 Blocki, Jeremiah

  • Refine by Classification
  • 10 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 9 Theory of computation → Approximation algorithms analysis
  • 7 Theory of computation → Design and analysis of algorithms
  • 6 Security and privacy → Information-theoretic techniques
  • 6 Theory of computation

  • Refine by Keyword
  • 6 approximation algorithms
  • 5 differential privacy
  • 4 lower bounds
  • 3 Differential Privacy
  • 3 Differential privacy

  • Refine by Type
  • 118 document
  • 2 volume

  • Refine by Publication Year
  • 71 2023
  • 28 2024
  • 20 2020
  • 1 2019
