32 Search Results for "Jain, Rahul"


Document
Random-Order Online Independent Set of Intervals and Hyperrectangles

Authors: Mohit Garg, Debajyoti Kar, and Arindam Khan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of n (possibly overlapping) d-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For d = 1, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for d-dimensional hyperrectangles, polynomial-time (log n)^{O(d)}-approximation algorithms are known [Chalermsook and Chuzhoy, 2009]. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining feasibility. Even for interval scheduling, an Ω(n) lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects, which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention as a way to study algorithms beyond worst-case competitive analysis (see the survey by Gupta and Singla [Anupam Gupta and Sahil Singla, 2020]). Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple (log n)^{O(d)}-competitive algorithm for d-dimensional hyperrectangles in this model, which runs in Õ_d(n) time. Our approach also yields (log n)^{O(d)}-competitive algorithms in the random-order model for more general objects such as d-dimensional fat objects and ellipsoids. Furthermore, all our competitiveness guarantees hold with high probability, and not just in expectation.
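
The abstract's d = 1 baseline is the classical Interval Scheduling greedy rule: sort intervals by right endpoint and repeatedly take the next interval that does not overlap the last one selected. A minimal illustrative sketch of that textbook rule follows (not code from the paper; the interval representation and the convention that touching intervals are compatible are our own assumptions):

# Offline Interval Scheduling greedy: sort by right endpoint, keep every
# interval whose start is at or after the end of the last kept interval.
# Intervals are (start, end) tuples; touching intervals count as compatible.
def max_independent_intervals(intervals):
    selected = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # compatible with everything selected so far
            selected.append((start, end))
            last_end = end
    return selected

# Example: an optimal solution here has three pairwise-disjoint intervals.
print(max_independent_intervals([(0, 3), (2, 5), (3, 7), (1, 8), (7, 9)]))
# -> [(0, 3), (3, 7), (7, 9)]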

Cite as

Mohit Garg, Debajyoti Kar, and Arindam Khan. Random-Order Online Independent Set of Intervals and Hyperrectangles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{garg_et_al:LIPIcs.ESA.2024.58,
  author =	{Garg, Mohit and Kar, Debajyoti and Khan, Arindam},
  title =	{{Random-Order Online Independent Set of Intervals and Hyperrectangles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.58},
  URN =		{urn:nbn:de:0030-drops-211298},
  doi =		{10.4230/LIPIcs.ESA.2024.58},
  annote =	{Keywords: Online Algorithms, Random-Order Model, Maximum Independent Set of Rectangles, Hyperrectangles, Fat Objects, Interval Scheduling}
}
Document
APPROX
Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment

Authors: Tomer Ezra, Stefano Leonardi, Michał Pawłowski, Matteo Russo, and Seeun William Umboh

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The online joint replenishment problem (JRP) is a fundamental problem in the area of online problems with delay. Over the last decade, several works have studied generalizations of JRP with different cost functions for servicing requests. Most prior works on JRP and its generalizations have focused on the clairvoyant setting. Recently, Touitou [Noam Touitou, 2023] developed a non-clairvoyant framework that provided an O(√{n log n}) upper bound for a wide class of generalized JRP, where n is the number of request types. We advance the study of non-clairvoyant algorithms by providing a simpler, modular framework that matches the competitive ratio established by Touitou for the same class of generalized JRP. Our key insight is to leverage universal algorithms for Set Cover to approximate arbitrary monotone subadditive functions using a simple class of functions termed disjoint. This allows us to reduce the problem to several independent instances of the TCP Acknowledgement problem, for which a simple 2-competitive non-clairvoyant algorithm is known. The modularity of our framework is a major advantage as it allows us to tailor the reduction to specific problems and obtain better competitive ratios. In particular, we obtain tight O(√n)-competitive algorithms for two significant problems: Multi-Level Aggregation and Weighted Symmetric Subadditive Joint Replenishment. We also show that, in contrast, Touitou’s algorithm is Ω(√{n log n})-competitive for both of these problems.
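
The reduction's base case, TCP Acknowledgement, has a classical deterministic 2-competitive rule: acknowledge as soon as the accumulated delay cost of pending packets reaches the cost of one acknowledgement. A small discrete-time sketch of that known rule (our own toy illustration, not the paper's framework; the arrival times, ack cost, and horizon are illustrative parameters):

# Classic 2-competitive rule for TCP Acknowledgement: send an ack as soon as
# the total waiting time of pending packets reaches the fixed ack cost.
def tcp_ack_schedule(arrivals, ack_cost, horizon):
    pending = []  # arrival times of packets not yet acknowledged
    acks = []     # times at which acknowledgements are sent
    for t in range(horizon + 1):
        pending.extend(a for a in arrivals if a == t)
        delay_cost = sum(t - a for a in pending)  # accumulated delay so far
        if pending and delay_cost >= ack_cost:
            acks.append(t)
            pending = []
    return acks

# Packets arrive at times 0, 1, 2 and 10 with ack cost 3: acks at times 2 and 13.
print(tcp_ack_schedule([0, 1, 2, 10], ack_cost=3, horizon=15))  # -> [2, 13]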

Cite as

Tomer Ezra, Stefano Leonardi, Michał Pawłowski, Matteo Russo, and Seeun William Umboh. Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ezra_et_al:LIPIcs.APPROX/RANDOM.2024.12,
  author =	{Ezra, Tomer and Leonardi, Stefano and Paw{\l}owski, Micha{\l} and Russo, Matteo and Umboh, Seeun William},
  title =	{{Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.12},
  URN =		{urn:nbn:de:0030-drops-210050},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.12},
  annote =	{Keywords: Set Cover, Joint Replenishment, TCP-Acknowledgment, Subadditive Function Approximation, Multi-Level Aggregation}
}
Document
Musketeer: Incentive-Compatible Rebalancing for Payment Channel Networks

Authors: Zeta Avarikioti, Stefan Schmid, and Samarth Tiwari

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
In this work, we revisit the severely limited throughput problem of cryptocurrencies and propose a novel rebalancing approach for Payment Channel Networks (PCNs). PCNs are a popular solution for increasing blockchain throughput; however, their benefit depends on the users' overall liquidity. Rebalancing mechanisms are the state-of-the-art approach to maintaining high liquidity in PCNs. However, existing opt-in rebalancing mechanisms exclude users that may assist in rebalancing for small service fees, leading to suboptimal solutions and under-utilization of the PCNs' bounded liquidity. We introduce the first rebalancing approach for PCNs that includes all users, following an "all for one and one for all" design philosophy that yields optimal throughput. The proposed approach introduces a double-auction rebalancing problem, which we term Musketeer, where users can participate as buyers (paying fees to rebalance) or sellers (charging fees to route transactions). We formally define the desired properties tailored to the unique characteristics of PCNs, including the novel game-theoretic property of cyclic budget balance, a stronger variation of strong budget balance. We present basic results derived from auction theory, including an impossibility result and multiple mechanisms that either achieve all desiderata under a relaxed model or sacrifice one of the properties. We also propose a novel mechanism that leverages time delays as an additional cost to users. This mechanism is provably truthful, cyclic budget balanced, individually rational, and economically efficient, but only with respect to liquidity.
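
For background on what "rebalancing" means operationally (a standard PCN fact, not the paper's mechanism): channels forming a cycle can be rebalanced by routing a circular payment, which shifts liquidity inside each channel while leaving every node's total balance unchanged. A toy sketch under that assumption, with illustrative node names and channel balances:

# Cycle rebalancing sketch: push the same amount delta around a cycle of
# channels. balances maps (u, v) to u's balance on the u-v channel; after the
# push, every node's total balance across its channels is unchanged.
def rebalance_cycle(balances, cycle, delta):
    new = dict(balances)
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):  # consecutive cycle edges
        assert new[(u, v)] >= delta, "insufficient liquidity on channel"
        new[(u, v)] -= delta  # u pushes delta to v ...
        new[(v, u)] += delta  # ... so v's side of the same channel grows
    return new

channels = {("A", "B"): 5, ("B", "A"): 1,
            ("B", "C"): 4, ("C", "B"): 2,
            ("C", "A"): 3, ("A", "C"): 0}
print(rebalance_cycle(channels, ["A", "B", "C"], delta=2))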

Cite as

Zeta Avarikioti, Stefan Schmid, and Samarth Tiwari. Musketeer: Incentive-Compatible Rebalancing for Payment Channel Networks. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 13:1-13:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{avarikioti_et_al:LIPIcs.AFT.2024.13,
  author =	{Avarikioti, Zeta and Schmid, Stefan and Tiwari, Samarth},
  title =	{{Musketeer: Incentive-Compatible Rebalancing for Payment Channel Networks}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{13:1--13:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.13},
  URN =		{urn:nbn:de:0030-drops-209494},
  doi =		{10.4230/LIPIcs.AFT.2024.13},
  annote =	{Keywords: Blockchains, Payment Channel Networks, Rebalancing, Game Theory}
}
Document
Quantum Delegation with an Off-The-Shelf Device

Authors: Anne Broadbent, Arthur Mehta, and Yuming Zhao

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
Given that reliable cloud quantum computers are becoming closer to reality, the concept of delegation of quantum computations and its verifiability is of central interest. Many models have been proposed, each with specific strengths and weaknesses. Here, we put forth a new model where the client trusts only its classical processing, makes no computational assumptions, and interacts with a quantum server in a single round. In addition, during a set-up phase, the client specifies the size n of the computation and receives an untrusted, off-the-shelf (OTS) quantum device that is used to report the outcome of a single measurement. We show how to delegate polynomial-time quantum computations in the OTS model. This also yields an interactive proof system for all of QMA, which, furthermore, we show can be accomplished in statistical zero-knowledge. This provides the first relativistic (one-round), two-prover zero-knowledge proof system for QMA. As a proof approach, we provide a new self-test for n EPR pairs using only constant-sized Pauli measurements, and show how it provides a new avenue for the use of simulatable codes for local Hamiltonian verification. Along the way, we also provide an enhanced version of a well-known stability result due to Gowers and Hatami and show how it completes a common argument used in self-testing.

Cite as

Anne Broadbent, Arthur Mehta, and Yuming Zhao. Quantum Delegation with an Off-The-Shelf Device. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 12:1-12:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{broadbent_et_al:LIPIcs.TQC.2024.12,
  author =	{Broadbent, Anne and Mehta, Arthur and Zhao, Yuming},
  title =	{{Quantum Delegation with an Off-The-Shelf Device}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{12:1--12:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.12},
  URN =		{urn:nbn:de:0030-drops-206824},
  doi =		{10.4230/LIPIcs.TQC.2024.12},
  annote =	{Keywords: Delegated quantum computation, zero-knowledge proofs, device-independence}
}
Document
Quantum Polynomial Hierarchies: Karp-Lipton, Error Reduction, and Lower Bounds

Authors: Avantika Agarwal, Sevag Gharibian, Venkata Koppula, and Dorian Rudolph

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
The Polynomial-Time Hierarchy (PH) is a staple of classical complexity theory, with applications spanning randomized computation to circuit lower bounds to "quantum advantage" analyses for near-term quantum computers. Quantumly, however, despite the fact that at least four definitions of quantum PH exist, it has been challenging to prove analogues of even basic facts from PH for these classes. This work studies three quantum-verifier based generalizations of PH, two of which are from [Gharibian, Santha, Sikora, Sundaram, Yirka, 2022] and use classical strings (QCPH) and quantum mixed states (QPH) as proofs, and one of which is new to this work, utilizing quantum pure states (QPHpure) as proofs. We first resolve several open problems from [GSSSY22], including a collapse theorem and a Karp-Lipton theorem for QCPH. Then, for our new class QPHpure, we show one-sided error reduction, as well as the first bounds relating these quantum variants of PH, namely QCPH ⊆ QPHpure ⊆ EXP^PP.

Cite as

Avantika Agarwal, Sevag Gharibian, Venkata Koppula, and Dorian Rudolph. Quantum Polynomial Hierarchies: Karp-Lipton, Error Reduction, and Lower Bounds. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.MFCS.2024.7,
  author =	{Agarwal, Avantika and Gharibian, Sevag and Koppula, Venkata and Rudolph, Dorian},
  title =	{{Quantum Polynomial Hierarchies: Karp-Lipton, Error Reduction, and Lower Bounds}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.7},
  URN =		{urn:nbn:de:0030-drops-205632},
  doi =		{10.4230/LIPIcs.MFCS.2024.7},
  annote =	{Keywords: Quantum complexity, polynomial hierarchy}
}
Document
A Technique for Hardness Amplification Against AC⁰

Authors: William M. Hoza

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study hardness amplification in the context of two well-known "moderate" average-case hardness results for AC⁰ circuits. First, we investigate the extent to which AC⁰ circuits of depth d can approximate AC⁰ circuits of some larger depth d + k. The case k = 1 is resolved by Håstad, Rossman, Servedio, and Tan’s celebrated average-case depth hierarchy theorem (JACM 2017). Our contribution is a significantly stronger correlation bound when k ≥ 3. Specifically, we show that there exists a linear-size AC⁰_{d + k} circuit h : {0, 1}ⁿ → {0, 1} such that for every AC⁰_d circuit g, either g has size exp(n^{Ω(1/d)}), or else g agrees with h on at most a (1/2 + ε)-fraction of inputs where ε = exp(-(1/d) ⋅ Ω(log n)^{k-1}). For comparison, Håstad, Rossman, Servedio, and Tan’s result has ε = n^{-Θ(1/d)}. Second, we consider the majority function. It is well known that the majority function is moderately hard for AC⁰ circuits (and stronger classes). Our contribution is a stronger correlation bound for the XOR of t copies of the n-bit majority function, denoted MAJ_n^{⊕ t}. We show that if g is an AC⁰_d circuit of size S, then g agrees with MAJ_n^{⊕ t} on at most a (1/2 + ε)-fraction of inputs, where ε = (O(log S)^{d - 1} / √n)^t. To prove these results, we develop a hardness amplification technique that is tailored to a specific type of circuit lower bound proof. In particular, one way to show that a function h is moderately hard for AC⁰ circuits is to (a) design some distribution over random restrictions or random projections, (b) show that AC⁰ circuits simplify to shallow decision trees under these restrictions/projections, and finally (c) show that after applying the restriction/projection, h is moderately hard for shallow decision trees with respect to an appropriate distribution. We show that (roughly speaking) if h can be proven to be moderately hard by a proof with that structure, then XORing multiple copies of h amplifies its hardness. Our analysis involves a new kind of XOR lemma for decision trees, which might be of independent interest.
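
For concreteness, the amplified hard function MAJ_n^{⊕ t} in the second result is simply the XOR of t disjoint copies of the n-bit majority function. A minimal definition (our own illustrative code, not from the paper; n is assumed odd so majority is unambiguous):

# MAJ_n^{⊕ t}: XOR together the majority bit of each of t disjoint n-bit blocks.
def maj(bits):
    return 1 if 2 * sum(bits) > len(bits) else 0

def maj_xor(blocks):
    result = 0
    for block in blocks:
        result ^= maj(block)
    return result

# t = 2 copies of MAJ_3: MAJ(1,1,0) = 1, MAJ(0,0,1) = 0, so the XOR is 1.
print(maj_xor([[1, 1, 0], [0, 0, 1]]))  # -> 1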

Cite as

William M. Hoza. A Technique for Hardness Amplification Against AC⁰. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2024.1,
  author =	{Hoza, William M.},
  title =	{{A Technique for Hardness Amplification Against AC⁰}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.1},
  URN =		{urn:nbn:de:0030-drops-203977},
  doi =		{10.4230/LIPIcs.CCC.2024.1},
  annote =	{Keywords: Bounded-depth circuits, average-case lower bounds, hardness amplification, XOR lemmas}
}
Document
Derandomizing Logspace with a Small Shared Hard Drive

Authors: Edward Pyne

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We obtain new catalytic algorithms for space-bounded derandomization. In the catalytic computation model introduced by (Buhrman, Cleve, Koucký, Loff, and Speelman STOC 2013), we are given a small worktape, and a larger catalytic tape that has an arbitrary initial configuration. We may edit this tape, but it must be exactly restored to its initial configuration at the completion of the computation. We prove that BPSPACE[S] ⊆ CSPACE[S,S²], where BPSPACE[S] corresponds to randomized space S computation, and CSPACE[S,C] corresponds to catalytic algorithms that use O(S) bits of workspace and O(C) bits of catalytic space. Previously, only BPSPACE[S] ⊆ CSPACE[S,2^O(S)] was known. In fact, we prove a general tradeoff: for every α ∈ [1,1.5], BPSPACE[S] ⊆ CSPACE[S^α,S^(3-α)]. We do not use the algebraic techniques of prior work on catalytic computation. Instead, we develop an algorithm that branches based on whether the catalytic tape is conditionally random, and instantiate this primitive in a recursive framework. Our result gives an alternate proof of the best known time-space tradeoff for BPSPACE[S], due to (Cai, Chakaravarthy, and van Melkebeek, Theory Comput. Sys. 2006). As a final application, we extend our results to solve search problems in CSPACE[S,S²]. As far as we are aware, this constitutes the first study of search problems in the catalytic computing model.

Cite as

Edward Pyne. Derandomizing Logspace with a Small Shared Hard Drive. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{pyne:LIPIcs.CCC.2024.4,
  author =	{Pyne, Edward},
  title =	{{Derandomizing Logspace with a Small Shared Hard Drive}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.4},
  URN =		{urn:nbn:de:0030-drops-204006},
  doi =		{10.4230/LIPIcs.CCC.2024.4},
  annote =	{Keywords: Catalytic computation, space-bounded computation, derandomization}
}
Document
The Entangled Quantum Polynomial Hierarchy Collapses

Authors: Sabee Grewal and Justin Yirka

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We introduce the entangled quantum polynomial hierarchy, QEPH, as the class of problems that are efficiently verifiable given alternating quantum proofs that may be entangled with each other. We prove QEPH collapses to its second level. In fact, we show that a polynomial number of alternations collapses to just two. As a consequence, QEPH = QRG(1), the class of problems having one-turn quantum refereed games, which is known to be contained in PSPACE. This is in contrast to the unentangled quantum polynomial hierarchy, QPH, which contains QMA(2). We also introduce DistributionQCPH, a generalization of the quantum-classical polynomial hierarchy QCPH where the provers send probability distributions over strings (instead of strings). We prove DistributionQCPH = QCPH, suggesting that only quantum superposition (not classical probability) increases the computational power of these hierarchies. To prove this equality, we generalize a game-theoretic result of Lipton and Young (1994) which says that, without loss of generality, the provers can send uniform distributions over a polynomial-size support. We also prove the analogous result for the polynomial hierarchy, i.e., DistributionPH = PH. Finally, we show that PH and QCPH are contained in QPH, resolving an open question of Gharibian et al. (2022).

Cite as

Sabee Grewal and Justin Yirka. The Entangled Quantum Polynomial Hierarchy Collapses. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{grewal_et_al:LIPIcs.CCC.2024.6,
  author =	{Grewal, Sabee and Yirka, Justin},
  title =	{{The Entangled Quantum Polynomial Hierarchy Collapses}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.6},
  URN =		{urn:nbn:de:0030-drops-204028},
  doi =		{10.4230/LIPIcs.CCC.2024.6},
  annote =	{Keywords: Polynomial hierarchy, Entangled proofs, Correlated proofs, Minimax}
}
Document
A Strong Direct Sum Theorem for Distributional Query Complexity

Authors: Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Consider the expected query complexity of computing the k-fold direct product f^{⊗ k} of a function f to error ε with respect to a distribution μ^k. One strategy is to sequentially compute each of the k copies to error ε/k with respect to μ and apply the union bound. We prove a strong direct sum theorem showing that this naive strategy is essentially optimal. In particular, computing a direct product necessitates a blowup in both query complexity and error. Strong direct sum theorems contrast with results that only show a blowup in query complexity or error but not both. There has been a long line of such results for distributional query complexity, dating back to (Impagliazzo, Raz, Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem that holds for all functions in the standard query model had been elusive. A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo 1995) in the context of query complexity. We prove a new resilience lemma that accompanies it, showing that the hardcore of f^{⊗k} is likely to remain dense under arbitrary partitions of the input space.
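
To spell out the error accounting behind the naive strategy mentioned in the abstract: if each of the k copies is computed to error ε/k with respect to μ, the union bound gives

\Pr_{x \sim \mu^k}\bigl[\text{some copy is computed incorrectly}\bigr] \;\le\; \sum_{i=1}^{k} \frac{\varepsilon}{k} \;=\; \varepsilon,

at a query cost of k times the cost of a single copy at error ε/k; the strong direct sum theorem says this simultaneous blowup in both queries and error is essentially unavoidable.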

Cite as

Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan. A Strong Direct Sum Theorem for Distributional Query Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 16:1-16:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blanc_et_al:LIPIcs.CCC.2024.16,
  author =	{Blanc, Guy and Koch, Caleb and Strassle, Carmen and Tan, Li-Yang},
  title =	{{A Strong Direct Sum Theorem for Distributional Query Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{16:1--16:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.16},
  URN =		{urn:nbn:de:0030-drops-204123},
  doi =		{10.4230/LIPIcs.CCC.2024.16},
  annote =	{Keywords: Query complexity, direct product theorem, hardcore theorem}
}
Document
Information Dissemination via Broadcasts in the Presence of Adversarial Noise

Authors: Klim Efremenko, Gillat Kol, Dmitry Paramonov, Ran Raz, and Raghuvansh R. Saxena

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We initiate the study of error correcting codes over the multi-party adversarial broadcast channel. Specifically, we consider the classic information dissemination problem where n parties, each holding an input bit, wish to know each other’s input. For this, they communicate in rounds, where, in each round, one designated party sends a bit to all other parties over a channel governed by an adversary that may corrupt a constant fraction of the received communication. We mention that the dissemination problem has been studied in the stochastic noise model since the 80s. While stochastic noise in multi-party channels has received quite a bit of attention, the case of adversarial noise has largely been avoided, as such channels cannot handle more than a 1/n-fraction of errors. Indeed, this many errors allow an adversary to completely corrupt the incoming or outgoing communication for one of the parties and fail the protocol. Curiously, we show that by eliminating these "trivial" attacks, one can get a simple protocol resilient to a constant fraction of errors. Thus, a model that rules out such attacks is both necessary and sufficient to get a resilient protocol. The main shortcoming of our dissemination protocol is its length: it requires Θ(n²) communication rounds, whereas n rounds suffice in the absence of noise. Our main result is a matching lower bound of Ω(n²) on the length of any dissemination protocol in our model. Our proof first "gets rid" of the channel noise by converting it to a form of "input noise", showing that a noisy dissemination protocol implies a (noiseless) protocol for a version of the direct sum gap-majority problem. We conclude the proof with a tight lower bound for the latter problem, which may be of independent interest.

Cite as

Klim Efremenko, Gillat Kol, Dmitry Paramonov, Ran Raz, and Raghuvansh R. Saxena. Information Dissemination via Broadcasts in the Presence of Adversarial Noise. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 19:1-19:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{efremenko_et_al:LIPIcs.CCC.2024.19,
  author =	{Efremenko, Klim and Kol, Gillat and Paramonov, Dmitry and Raz, Ran and Saxena, Raghuvansh R.},
  title =	{{Information Dissemination via Broadcasts in the Presence of Adversarial Noise}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{19:1--19:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.19},
  URN =		{urn:nbn:de:0030-drops-204159},
  doi =		{10.4230/LIPIcs.CCC.2024.19},
  annote =	{Keywords: Radio Networks, Interactive Coding, Error Correcting Codes}
}
Document
Gap MCSP Is Not (Levin) NP-Complete in Obfustopia

Authors: Noam Mazor and Rafael Pass

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size Problem (MCSP) and the Minimum Time-Bounded Kolmogorov Complexity problem (MKTP) are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions. In more detail: - Assuming the existence of indistinguishability obfuscation, and subexponentially-secure one-way functions, an appropriate Gap version of MCSP is not NP-complete under randomized Levin-reductions. - Assuming the existence of subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions and injective PRGs, an appropriate Gap version of MKTP is not NP-complete under randomized Levin-reductions.

Cite as

Noam Mazor and Rafael Pass. Gap MCSP Is Not (Levin) NP-Complete in Obfustopia. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mazor_et_al:LIPIcs.CCC.2024.36,
  author =	{Mazor, Noam and Pass, Rafael},
  title =	{{Gap MCSP Is Not (Levin) NP-Complete in Obfustopia}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.36},
  URN =		{urn:nbn:de:0030-drops-204322},
  doi =		{10.4230/LIPIcs.CCC.2024.36},
  annote =	{Keywords: Kolmogorov complexity, MCSP, Levin Reduction}
}
Document
Engineering Weighted Connectivity Augmentation Algorithms

Authors: Marcelo Fonseca Faraj, Ernestine Großmann, Felix Joos, Thomas Möller, and Christian Schulz

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Increasing the connectivity of a graph is a pivotal challenge in robust network design. The weighted connectivity augmentation problem is a common version of the problem that takes link costs into consideration. The problem is then to find a minimum cost subset of a given set of weighted links that increases the connectivity of a graph by one when the links are added to the edge set of the input instance. In this work, we give a first implementation of recently discovered better-than-2 approximations. Furthermore, we propose three new heuristics and one exact approach. These include a greedy algorithm considering link costs and the number of unique cuts covered, an approach based on minimum spanning trees and a local search algorithm that may improve a given solution by swapping links of paths. Our exact approach uses an ILP formulation with efficient cut enumeration as well as a fast initialization routine. We then perform an extensive experimental evaluation which shows that our algorithms are faster and yield the best solutions compared to the current state-of-the-art as well as the recently discovered better-than-2 approximation algorithms. Our novel local search algorithm can improve solution quality even further.

Cite as

Marcelo Fonseca Faraj, Ernestine Großmann, Felix Joos, Thomas Möller, and Christian Schulz. Engineering Weighted Connectivity Augmentation Algorithms. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 11:1-11:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{faraj_et_al:LIPIcs.SEA.2024.11,
  author =	{Faraj, Marcelo Fonseca and Gro{\ss}mann, Ernestine and Joos, Felix and M\"{o}ller, Thomas and Schulz, Christian},
  title =	{{Engineering Weighted Connectivity Augmentation Algorithms}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{11:1--11:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.11},
  URN =		{urn:nbn:de:0030-drops-203768},
  doi =		{10.4230/LIPIcs.SEA.2024.11},
  annote =	{Keywords: weighted connectivity augmentation, approximation, heuristic, integer linear program, algorithm engineering}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Smoothed Analysis of Deterministic Discounted and Mean-Payoff Games

Authors: Bruno Loff and Mateusz Skomra

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We devise a policy-iteration algorithm for deterministic two-player discounted and mean-payoff games, that runs in polynomial time with high probability, on any input where each payoff is chosen independently from a sufficiently random distribution and the underlying graph of the game is ergodic. This includes the case where an arbitrary set of payoffs has been perturbed by a Gaussian, showing for the first time that deterministic two-player games can be solved efficiently, in the sense of smoothed analysis. More generally, we devise a condition number for deterministic discounted and mean-payoff games played on ergodic graphs, and show that our algorithm runs in time polynomial in this condition number. Our result confirms a previous conjecture of Boros et al., which was claimed as a theorem [Boros et al., 2011] and later retracted [Boros et al., 2018]. It stands in contrast with a recent counter-example by Christ and Yannakakis [Christ and Yannakakis, 2023], showing that Howard’s policy-iteration algorithm does not run in smoothed polynomial time on stochastic single-player mean-payoff games. Our approach is inspired by the analysis of random optimal assignment instances by Frieze and Sorkin [Frieze and Sorkin, 2007], and the analysis of bias-induced policies for mean-payoff games by Akian, Gaubert and Hochart [Akian et al., 2018].

Cite as

Bruno Loff and Mateusz Skomra. Smoothed Analysis of Deterministic Discounted and Mean-Payoff Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 147:1-147:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{loff_et_al:LIPIcs.ICALP.2024.147,
  author =	{Loff, Bruno and Skomra, Mateusz},
  title =	{{Smoothed Analysis of Deterministic Discounted and Mean-Payoff Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{147:1--147:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.147},
  URN =		{urn:nbn:de:0030-drops-202908},
  doi =		{10.4230/LIPIcs.ICALP.2024.147},
  annote =	{Keywords: Mean-payoff games, discounted games, policy iteration, smoothed analysis}
}
Document
Track A: Algorithms, Complexity and Games
Two Choices Are Enough for P-LCPs, USOs, and Colorful Tangents

Authors: Michaela Borzechowski, John Fearnley, Spencer Gordon, Rahul Savani, Patrick Schnider, and Simon Weber

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We provide polynomial-time reductions between three search problems from three distinct areas: the P-matrix linear complementarity problem (P-LCP), finding the sink of a unique sink orientation (USO), and a variant of the α-Ham Sandwich problem. For all three settings, we show that "two choices are enough", meaning that the general non-binary version of the problem can be reduced in polynomial time to the binary version. This specifically means that generalized P-LCPs are equivalent to P-LCPs, and grid USOs are equivalent to cube USOs. These results are obtained by showing that both the P-LCP and our α-Ham Sandwich variant are equivalent to a new problem we introduce, P-Lin-Bellman. This problem can be seen as a new tool for formulating problems as P-LCPs.

Cite as

Michaela Borzechowski, John Fearnley, Spencer Gordon, Rahul Savani, Patrick Schnider, and Simon Weber. Two Choices Are Enough for P-LCPs, USOs, and Colorful Tangents. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{borzechowski_et_al:LIPIcs.ICALP.2024.32,
  author =	{Borzechowski, Michaela and Fearnley, John and Gordon, Spencer and Savani, Rahul and Schnider, Patrick and Weber, Simon},
  title =	{{Two Choices Are Enough for P-LCPs, USOs, and Colorful Tangents}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.32},
  URN =		{urn:nbn:de:0030-drops-201751},
  doi =		{10.4230/LIPIcs.ICALP.2024.32},
  annote =	{Keywords: P-LCP, Unique Sink Orientation, \alpha-Ham Sandwich, search complexity, TFNP, UEOPL}
}
Document
Total NP Search Problems with Abundant Solutions

Authors: Jiawei Li

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We define a new complexity class TFAP to capture TFNP problems that possess abundant solutions for each input. We identify several problems across diverse fields that belong to TFAP, including WeakPigeon (finding a collision in a mapping from [2n] pigeons to [n] holes), Yamakawa-Zhandry’s problem [Takashi Yamakawa and Mark Zhandry, 2022], and all problems in TFZPP. Conversely, we introduce the notion of "semi-gluability" to characterize TFNP problems that could have a unique or a very limited number of solutions for certain inputs. We prove that there is no black-box reduction from any "semi-gluable" problem to any TFAP problem. Furthermore, this can be extended to rule out randomized black-box reductions in most cases. We identify that the majority of common TFNP subclasses, including PPA, PPAD, PPADS, PPP, PLS, CLS, SOPL, and UEOPL, are "semi-gluable". This leads to a broad array of oracle separation results within the TFNP regime. As a corollary, UEOPL^O ⊈ PWPP^O relative to an oracle O.
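
To make the flagship example concrete: in WeakPigeon a collision is guaranteed by the pigeonhole principle, which is exactly why solutions are abundant. A minimal brute-force sketch (our own illustration with f given as an explicit table; in the TFNP setting f would be given succinctly, e.g. as a circuit):

# WeakPigeon: find two distinct pigeons in [2n] mapped to the same hole in [n].
# f is a list of length 2n with values in range(n); a collision must exist.
def weak_pigeon_collision(f, n):
    seen = {}  # hole -> first pigeon seen landing there
    for pigeon in range(2 * n):
        hole = f[pigeon]
        if hole in seen:
            return seen[hole], pigeon  # two distinct pigeons, same hole
        seen[hole] = pigeon
    raise ValueError("f is not a map from [2n] to [n]")

# Example with n = 3: six pigeons into three holes.
print(weak_pigeon_collision([0, 2, 1, 2, 0, 1], n=3))  # -> (1, 3)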

Cite as

Jiawei Li. Total NP Search Problems with Abundant Solutions. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 75:1-75:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li:LIPIcs.ITCS.2024.75,
  author =	{Li, Jiawei},
  title =	{{Total NP Search Problems with Abundant Solutions}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{75:1--75:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.75},
  URN =		{urn:nbn:de:0030-drops-196031},
  doi =		{10.4230/LIPIcs.ITCS.2024.75},
  annote =	{Keywords: TFNP, Pigeonhole Principle}
}
  • Refine by Author
  • 14 Jain, Rahul
  • 6 Tewari, Raghunath
  • 4 Gupta, Chetan
  • 3 Anshu, Anurag
  • 3 Kundu, Srijita
  • Show More...

  • Refine by Classification
  • 5 Theory of computation → Quantum complexity theory
  • 3 Theory of computation → Communication complexity
  • 3 Theory of computation → Complexity classes
  • 3 Theory of computation → Computational geometry
  • 2 Mathematics of computing → Graph algorithms
  • Show More...

  • Refine by Keyword
  • 3 communication complexity
  • 3 query complexity
  • 2 Catalytic computation
  • 2 Logspace
  • 2 Matching
  • Show More...

  • Refine by Type
  • 32 document

  • Refine by Publication Year
  • 15 2024
  • 5 2021
  • 4 2019
  • 2 2018
  • 1 2013
  • Show More...
