19 Search Results for "Lee, Justin"


Document
Hardness of Approximating Bounded-Degree Max 2-CSP and Independent Set on k-Claw-Free Graphs

Authors: Euiwoong Lee and Pasin Manurangsi

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We consider the question of approximating Max 2-CSP where each variable appears in at most d constraints (but with a possibly arbitrarily large alphabet). There is a simple ((d+1)/2)-approximation algorithm for the problem. We prove the following results for any sufficiently large d:
- Assuming the Unique Games Conjecture (UGC), it is NP-hard (under randomized reductions) to approximate this problem to within a factor of d/2 - o(d).
- It is NP-hard (under randomized reductions) to approximate the problem to within a factor of d/3 - o(d).
Thanks to a known connection [Pavel Dvořák et al., 2023], we establish the following hardness results for approximating Maximum Independent Set on k-claw-free graphs:
- Assuming the UGC, it is NP-hard (under randomized reductions) to approximate this problem to within a factor of k/4 - o(k).
- It is NP-hard (under randomized reductions) to approximate the problem to within a factor of k/(3 + 2√2) - o(k) ≥ k/5.829 - o(k).
In comparison, known approximation algorithms achieve a (k/2 - o(k))-approximation in polynomial time [Meike Neuwohner, 2021; Theophile Thiery and Justin Ward, 2023] and a (k/3 + o(k))-approximation in quasi-polynomial time [Marek Cygan et al., 2013].
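Putting the abstract's numbers side by side (nothing beyond what is stated above), the UGC bound sandwiches the simple algorithm tightly:

\[
\frac{d}{3} - o(d) \;\le\; \underbrace{\frac{d}{2} - o(d)}_{\text{UGC hardness}} \;\le\; \text{best polynomial-time ratio} \;\le\; \frac{d+1}{2},
\]

so, assuming the UGC, the simple ((d+1)/2)-approximation algorithm is optimal up to lower-order terms.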

Cite as

Euiwoong Lee and Pasin Manurangsi. Hardness of Approximating Bounded-Degree Max 2-CSP and Independent Set on k-Claw-Free Graphs. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 71:1-71:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lee_et_al:LIPIcs.ITCS.2024.71,
  author =	{Lee, Euiwoong and Manurangsi, Pasin},
  title =	{{Hardness of Approximating Bounded-Degree Max 2-CSP and Independent Set on k-Claw-Free Graphs}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{71:1--71:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.71},
  URN =		{urn:nbn:de:0030-drops-195996},
  doi =		{10.4230/LIPIcs.ITCS.2024.71},
  annote =	{Keywords: Hardness of Approximation, Bounded Degree, Constraint Satisfaction Problems, Independent Set}
}
Document
On Converses to the Polynomial Method

Authors: Jop Briët and Francisco Escudero Gutiérrez

Published in: LIPIcs, Volume 232, 17th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2022)


Abstract
A surprising "converse to the polynomial method" of Aaronson et al. (CCC'16) shows that any bounded quadratic polynomial can be computed exactly in expectation by a 1-query algorithm up to a universal multiplicative factor related to the famous Grothendieck constant. A natural question posed there asks if bounded quartic polynomials can be approximated by 2-query quantum algorithms. Arunachalam, Palazuelos and the first author showed that there is no direct analogue of the result of Aaronson et al. in this case. We improve on this result in the following ways: First, we point out and fix a small error in the construction that has to do with a translation from cubic to quartic polynomials. Second, we give a completely explicit example based on techniques from additive combinatorics. Third, we show that the result still holds when we allow for a small additive error. For this, we apply an SDP characterization of Gribling and Laurent (QIP'19) for the completely-bounded approximate degree.

Cite as

Jop Briët and Francisco Escudero Gutiérrez. On Converses to the Polynomial Method. In 17th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 232, pp. 6:1-6:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{briet_et_al:LIPIcs.TQC.2022.6,
  author =	{Bri\"{e}t, Jop and Escudero Guti\'{e}rrez, Francisco},
  title =	{{On Converses to the Polynomial Method}},
  booktitle =	{17th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2022)},
  pages =	{6:1--6:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-237-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{232},
  editor =	{Le Gall, Fran\c{c}ois and Morimae, Tomoyuki},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2022.6},
  URN =		{urn:nbn:de:0030-drops-165139},
  doi =		{10.4230/LIPIcs.TQC.2022.6},
  annote =	{Keywords: Quantum query complexity, polynomial method, completely bounded polynomials}
}
Document
Bounded Indistinguishability for Simple Sources

Authors: Andrej Bogdanov, Krishnamoorthy Dinesh, Yuval Filmus, Yuval Ishai, Avi Kaplan, and Akshayaram Srinivasan

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
A pair of sources X, Y over {0,1}ⁿ are k-indistinguishable if their projections to any k coordinates are identically distributed. Can some AC⁰ function distinguish between two such sources when k is big, say k = n^{0.1}? Braverman’s theorem (Commun. ACM 2011) implies a negative answer when X is uniform, whereas Bogdanov et al. (Crypto 2016) observe that this is not the case in general. We initiate a systematic study of this question for natural classes of low-complexity sources, including ones that arise in cryptographic applications, obtaining positive results, negative results, and barriers. In particular:
- There exist Ω(√n)-indistinguishable X, Y, samplable by degree-O(log n) polynomial maps (over F₂) and by poly(n)-size decision trees, that are Ω(1)-distinguishable by OR.
- There exists a function f such that all f(d, ε)-indistinguishable X, Y that are samplable by degree-d polynomial maps are ε-indistinguishable by OR for all sufficiently large n. Moreover, f(1, ε) = ⌈log(1/ε)⌉ + 1 and f(2, ε) = O(log^{10}(1/ε)).
- Extending (weaker versions of) the above negative results to AC⁰ distinguishers would require settling a conjecture of Servedio and Viola (ECCC 2012). Concretely, if every pair of n^{0.9}-indistinguishable X, Y that are samplable by linear maps is ε-indistinguishable by AC⁰ circuits, then the binary inner product function can have at most an ε-correlation with AC⁰ ∘ ⊕ circuits.
Finally, we motivate the question and our results by presenting applications of positive results to low-complexity secret sharing and applications of negative results to leakage-resilient cryptography.
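To make the central definition concrete, here is a minimal brute-force checker (our own sketch, not code from the paper; the toy parity sources below are illustrative): it tests whether two distributions over {0,1}ⁿ agree on every projection to k coordinates.

from itertools import combinations, product
from collections import defaultdict

def projection(dist, coords):
    """Marginal distribution of `dist` on the given coordinate tuple."""
    marg = defaultdict(float)
    for x, p in dist.items():
        marg[tuple(x[i] for i in coords)] += p
    return marg

def k_indistinguishable(X, Y, k, tol=1e-12):
    """True iff every k-coordinate projection of X and Y agrees."""
    n = len(next(iter(X)))
    for coords in combinations(range(n), k):
        px, py = projection(X, coords), projection(Y, coords)
        if any(abs(px[s] - py[s]) > tol for s in set(px) | set(py)):
            return False
    return True

# Toy example (n = 3): X uniform on even-parity strings, Y on odd-parity ones.
even = {s: 0.25 for s in product((0, 1), repeat=3) if sum(s) % 2 == 0}
odd  = {s: 0.25 for s in product((0, 1), repeat=3) if sum(s) % 2 == 1}
print(k_indistinguishable(even, odd, 2))  # True: agree on every pair
print(k_indistinguishable(even, odd, 3))  # False: parity tells them apart

The parity example also previews the phenomenon studied above: the two sources agree on every pair of coordinates yet are perfectly distinguished by a function of all n bits.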

Cite as

Andrej Bogdanov, Krishnamoorthy Dinesh, Yuval Filmus, Yuval Ishai, Avi Kaplan, and Akshayaram Srinivasan. Bounded Indistinguishability for Simple Sources. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bogdanov_et_al:LIPIcs.ITCS.2022.26,
  author =	{Bogdanov, Andrej and Dinesh, Krishnamoorthy and Filmus, Yuval and Ishai, Yuval and Kaplan, Avi and Srinivasan, Akshayaram},
  title =	{{Bounded Indistinguishability for Simple Sources}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{26:1--26:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.26},
  URN =		{urn:nbn:de:0030-drops-156223},
  doi =		{10.4230/LIPIcs.ITCS.2022.26},
  annote =	{Keywords: Pseudorandomness, bounded indistinguishability, complexity of sampling, constant-depth circuits, secret sharing, leakage-resilient cryptography}
}
Document
The Quantum Supremacy Tsirelson Inequality

Authors: William Kretschmer

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
A leading proposal for verifying near-term quantum supremacy experiments on noisy random quantum circuits is linear cross-entropy benchmarking. For a quantum circuit C on n qubits and a sample z ∈ {0,1}ⁿ, the benchmark involves computing |⟨z|C|0ⁿ⟩|², i.e. the probability of measuring z from the output distribution of C on the all-zeros input. Under a strong conjecture about the classical hardness of estimating output probabilities of quantum circuits, no polynomial-time classical algorithm given C can output a string z such that |⟨z|C|0ⁿ⟩|² is substantially larger than 1/2ⁿ (Aaronson and Gunn, 2019). On the other hand, for a random quantum circuit C, sampling z from the output distribution of C achieves |⟨z|C|0ⁿ⟩|² ≈ 2/2ⁿ on average (Arute et al., 2019). In analogy with the Tsirelson inequality from quantum nonlocal correlations, we ask: can a polynomial-time quantum algorithm do substantially better than 2/2ⁿ? We study this question in the query (or black-box) model, where the quantum algorithm is given oracle access to C. We show that, for any ε ≥ 1/poly(n), outputting a sample z such that |⟨z|C|0ⁿ⟩|² ≥ (2 + ε)/2ⁿ on average requires at least Ω(2^{n/4}/poly(n)) queries to C, but not more than O(2^{n/3}) queries to C, if C is either a Haar-random n-qubit unitary or a canonical state preparation oracle for a Haar-random n-qubit state. We also show that when C samples from the Fourier distribution of a random Boolean function, the naive algorithm that samples from C is the optimal 1-query algorithm for maximizing |⟨z|C|0ⁿ⟩|² on average.
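The 2/2ⁿ baseline is easy to see numerically. Below is a small simulation (our own sketch under stated assumptions: a Haar-random unitary stands in for the random circuit): sample z by measuring C|0ⁿ⟩ and average the benchmark score.

import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    """Haar-random unitary: QR of a complex Gaussian matrix, with phase fix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n, trials = 6, 2000
dim = 2 ** n
score = 0.0
for _ in range(trials):
    C = haar_unitary(dim)
    probs = np.abs(C[:, 0]) ** 2        # output distribution of C on |0^n>
    probs /= probs.sum()                # guard against float drift
    z = rng.choice(dim, p=probs)        # measure C|0^n> to get a sample z
    score += probs[z]                   # the benchmark quantity |<z|C|0^n>|^2
print(dim * score / trials)             # concentrates near 2, i.e. 2/2^n on average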

Cite as

William Kretschmer. The Quantum Supremacy Tsirelson Inequality. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kretschmer:LIPIcs.ITCS.2021.13,
  author =	{Kretschmer, William},
  title =	{{The Quantum Supremacy Tsirelson Inequality}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.13},
  URN =		{urn:nbn:de:0030-drops-135524},
  doi =		{10.4230/LIPIcs.ITCS.2021.13},
  annote =	{Keywords: quantum supremacy, quantum query complexity, random circuit sampling}
}
Document
Error Correcting Codes for Uncompressed Messages

Authors: Ofer Grossman and Justin Holmgren

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
Most types of messages we transmit (e.g., video, audio, images, text) are not fully compressed, since they do not have known efficient and information-theoretically optimal compression algorithms. When transmitting such messages, standard error correcting codes fail to take advantage of the fact that messages are not fully compressed. We show that in this setting, it is sub-optimal to use standard error correction. We consider a model where there is a set of "valid messages" which the sender may send, which may not be efficiently compressible, but where it is possible for the receiver to recognize valid messages. In this model, we construct a (probabilistic) encoding procedure that achieves better tradeoffs between data rate and error-resilience than simply applying a standard error correcting code. Additionally, our techniques yield improved efficiently decodable (probabilistic) codes for fully compressed messages (the standard setting where the set of valid messages is all binary strings) in the high-rate regime.

Cite as

Ofer Grossman and Justin Holmgren. Error Correcting Codes for Uncompressed Messages. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{grossman_et_al:LIPIcs.ITCS.2021.43,
  author =	{Grossman, Ofer and Holmgren, Justin},
  title =	{{Error Correcting Codes for Uncompressed Messages}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.43},
  URN =		{urn:nbn:de:0030-drops-135828},
  doi =		{10.4230/LIPIcs.ITCS.2021.43},
  annote =	{Keywords: Coding Theory, List Decoding}
}
Document
Counterexamples to the Low-Degree Conjecture

Authors: Justin Holmgren and Alexander S. Wein

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
A conjecture of Hopkins (2018) posits that for certain high-dimensional hypothesis testing problems, no polynomial-time algorithm can outperform so-called "simple statistics", which are low-degree polynomials in the data. This conjecture formalizes the beliefs surrounding a line of recent work that seeks to understand statistical-versus-computational tradeoffs via the low-degree likelihood ratio. In this work, we refute the conjecture of Hopkins. However, our counterexample crucially exploits the specifics of the noise operator used in the conjecture, and we point out a simple way to modify the conjecture to rule out our counterexample. We also give an example illustrating that (even after the above modification), the symmetry assumption in the conjecture is necessary. These results do not undermine the low-degree framework for computational lower bounds, but rather aim to better understand what class of problems it is applicable to.

Cite as

Justin Holmgren and Alexander S. Wein. Counterexamples to the Low-Degree Conjecture. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 75:1-75:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{holmgren_et_al:LIPIcs.ITCS.2021.75,
  author =	{Holmgren, Justin and Wein, Alexander S.},
  title =	{{Counterexamples to the Low-Degree Conjecture}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{75:1--75:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.75},
  URN =		{urn:nbn:de:0030-drops-136148},
  doi =		{10.4230/LIPIcs.ITCS.2021.75},
  annote =	{Keywords: Low-degree likelihood ratio, error-correcting codes}
}
Document
Read Mapping on Genome Variation Graphs

Authors: Kavya Vaddadi, Rajgopal Srinivasan, and Naveen Sivadasan

Published in: LIPIcs, Volume 143, 19th International Workshop on Algorithms in Bioinformatics (WABI 2019)


Abstract
Genome variation graphs are natural candidates for representing a pangenome collection. In such graphs, common subsequences are encoded as vertices, and genomic variations are captured by introducing additional labeled vertices and directed edges. Unlike a linear reference, a reference graph allows a rich representation of genomic diversity and avoids reference bias. We address the fundamental problem of mapping reads to genome variation graphs. We give a novel mapping algorithm, V-MAP, for efficient identification of a small subgraph of the genome graph for optimal gapped alignment of the read. V-MAP creates a space-efficient index using locality-sensitive minimizer signatures, computed using a novel graph winnowing, and graph embedding onto a metric space for fast and accurate mapping. Experiments on graphs constructed from the 1000 Genomes data, using both real and simulated reads, show that V-MAP is fast, memory-efficient, and can map short reads as well as PacBio/Nanopore long reads with high accuracy. V-MAP's performance was significantly better than the state of the art, especially for long reads.
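For readers unfamiliar with minimizer signatures, the sketch below shows classic winnowing on a linear sequence (V-MAP's graph winnowing generalizes this idea to graphs; the parameters and code here are our illustrative assumptions, not the paper's): each window of w consecutive k-mers contributes its smallest k-mer, so reads sharing a long substring share minimizers.

def minimizers(seq, k, w):
    """(k, w)-minimizers: the lexicographically smallest k-mer in each
    window of w consecutive k-mers; ties go to the leftmost position."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = set()
    for start in range(len(kmers) - w + 1):
        j = min(range(start, start + w), key=lambda i: kmers[i])
        picked.add((j, kmers[j]))
    return sorted(picked)

read = "ACGTACGGTAGCTAGCTTACG"
print(minimizers(read, k=5, w=4))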

Cite as

Kavya Vaddadi, Rajgopal Srinivasan, and Naveen Sivadasan. Read Mapping on Genome Variation Graphs. In 19th International Workshop on Algorithms in Bioinformatics (WABI 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 143, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{vaddadi_et_al:LIPIcs.WABI.2019.7,
  author =	{Vaddadi, Kavya and Srinivasan, Rajgopal and Sivadasan, Naveen},
  title =	{{Read Mapping on Genome Variation Graphs}},
  booktitle =	{19th International Workshop on Algorithms in Bioinformatics (WABI 2019)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-123-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{143},
  editor =	{Huber, Katharina T. and Gusfield, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2019.7},
  URN =		{urn:nbn:de:0030-drops-110375},
  doi =		{10.4230/LIPIcs.WABI.2019.7},
  annote =	{Keywords: read mapping, pangenome, genome variation graphs, locality sensitive hashing}
}
Document
Blame Tracking and Type Error Debugging

Authors: Sheng Chen and John Peter Campora III

Published in: LIPIcs, Volume 136, 3rd Summit on Advances in Programming Languages (SNAPL 2019)


Abstract
In this work, we present an unexpected connection between gradual typing and type error debugging. Namely, we illustrate that gradual typing provides a natural way to defer type errors in statically ill-typed programs, providing more feedback than traditional approaches to deferring type errors. When evaluating expressions that lead to runtime type errors, the usefulness of the feedback depends on blame tracking, the de facto approach to locating the cause of such runtime type errors. Unfortunately, blame tracking suffers from the bias problem for type error localization in languages with type inference. We illustrate and formalize the bias problem for blame tracking, present ideas for adapting existing type error debugging techniques to combat this bias, and outline further challenges.
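A minimal Python vignette of the deferral-plus-bias phenomenon (our own illustration; the paper's setting is languages with type inference, not Python): the inconsistency is introduced at the definition site, but the runtime error is blamed where the value is finally used.

def make_label(n):
    # The real mistake is here: this definition assumes `n` is a string.
    return lambda: "item " + n          # should be "item " + str(n)

label = make_label(7)                   # ill-typed, but no error yet: deferred
# ... much later, far from the definition ...
try:
    print(label())
except TypeError as e:
    print("blamed at the call site:", e)  # bias: the real cause is in make_label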

Cite as

Sheng Chen and John Peter Campora III. Blame Tracking and Type Error Debugging. In 3rd Summit on Advances in Programming Languages (SNAPL 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 136, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.SNAPL.2019.2,
  author =	{Chen, Sheng and Campora III, John Peter},
  title =	{{Blame Tracking and Type Error Debugging}},
  booktitle =	{3rd Summit on Advances in Programming Languages (SNAPL 2019)},
  pages =	{2:1--2:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-113-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{136},
  editor =	{Lerner, Benjamin S. and Bod{\'\i}k, Rastislav and Krishnamurthi, Shriram},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SNAPL.2019.2},
  URN =		{urn:nbn:de:0030-drops-105451},
  doi =		{10.4230/LIPIcs.SNAPL.2019.2},
  annote =	{Keywords: Blame tracking, type error debugging, gradual typing, type inference}
}
Document
Toward Domain-Specific Solvers for Distributed Consistency

Authors: Lindsey Kuper and Peter Alvaro

Published in: LIPIcs, Volume 136, 3rd Summit on Advances in Programming Languages (SNAPL 2019)


Abstract
To guard against machine failures, modern internet services store multiple replicas of the same application data within and across data centers, which introduces the problem of keeping geo-distributed replicas consistent with one another in the face of network partitions and unpredictable message latency. To avoid costly and conservative synchronization protocols, many real-world systems provide only weak consistency guarantees (e.g., eventual, causal, or PRAM consistency), which permit certain kinds of disagreement among replicas.

There has been much recent interest in language support for specifying and verifying such consistency properties. Although these properties are usually beyond the scope of what traditional type checkers or compiler analyses can guarantee, solver-aided languages are up to the task. Inspired by systems like Liquid Haskell [Vazou et al., 2014] and Rosette [Torlak and Bodik, 2014], we believe that close integration between a language and a solver is the right path to consistent-by-construction distributed applications.

Unfortunately, verifying distributed consistency properties requires reasoning about transitive relations (e.g., causality or happens-before), partial orders (e.g., the lattice of replica states under a convergent merge operation), and properties relevant to message processing or API invocation (e.g., commutativity and idempotence) that cannot be easily or efficiently carried out by general-purpose SMT solvers that lack native support for this kind of reasoning. We argue that domain-specific SMT-based tools that exploit the mathematical foundations of distributed consistency would enable both more efficient verification and improved ease of use for domain experts. The principle of exploiting domain knowledge for efficiency and expressivity that has borne fruit elsewhere - such as in the development of high-performance domain-specific languages that trade off generality to gain both performance and productivity - also applies here.

Languages augmented with domain-specific, consistency-aware solvers would support the rapid implementation of formally verified programming abstractions that guarantee distributed consistency. In the long run, we aim to democratize the development of such domain-specific solvers by creating a framework for domain-specific solver development that brings new theory solver implementation within the reach of programmers who are not necessarily SMT solver internals experts.
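As a taste of the proof obligations such a solver would discharge, here is a minimal randomized check (our sketch, not the authors' tooling) of the semilattice laws behind convergent merges: commutativity, associativity, and idempotence, instantiated with pointwise max as in a grow-only counter.

import random

def merge(a, b):
    """Join of two replica states: pointwise max of per-key counters."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

rng = random.Random(0)
def random_state():
    return {k: rng.randrange(5) for k in rng.sample("abcde", 3)}

for _ in range(1000):
    x, y, z = random_state(), random_state(), random_state()
    assert merge(x, y) == merge(y, x)                       # commutativity
    assert merge(merge(x, y), z) == merge(x, merge(y, z))   # associativity
    assert merge(x, x) == x                                 # idempotence
print("merge satisfied the semilattice laws on all sampled states")

A domain-specific theory solver would prove these laws for all states rather than sampling them.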

Cite as

Lindsey Kuper and Peter Alvaro. Toward Domain-Specific Solvers for Distributed Consistency. In 3rd Summit on Advances in Programming Languages (SNAPL 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 136, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kuper_et_al:LIPIcs.SNAPL.2019.10,
  author =	{Kuper, Lindsey and Alvaro, Peter},
  title =	{{Toward Domain-Specific Solvers for Distributed Consistency}},
  booktitle =	{3rd Summit on Advances in Programming Languages (SNAPL 2019)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-113-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{136},
  editor =	{Lerner, Benjamin S. and Bod{\'\i}k, Rastislav and Krishnamurthi, Shriram},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SNAPL.2019.10},
  URN =		{urn:nbn:de:0030-drops-105530},
  doi =		{10.4230/LIPIcs.SNAPL.2019.10},
  annote =	{Keywords: distributed consistency, SMT solving, theory solvers}
}
Document
How to Avoid Making a Billion-Dollar Mistake: Type-Safe Data Plane Programming with SafeP4

Authors: Matthias Eichholz, Eric Campbell, Nate Foster, Guido Salvaneschi, and Mira Mezini

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
The P4 programming language offers high-level, declarative abstractions that bring the flexibility of software to the domain of networking. Unfortunately, the main abstraction used to represent packet data in P4, namely header types, lacks basic safety guarantees. Over the last few years, experience with an increasing number of programs has shown the risks of the unsafe approach, which often leads to subtle software bugs. This paper proposes SafeP4, a domain-specific language for programmable data planes in which all packet data is guaranteed to have a well-defined meaning and satisfy essential safety guarantees. We equip SafeP4 with a formal semantics and a static type system that statically guarantees header validity - a common source of safety bugs according to our analysis of real-world P4 programs. Statically ensuring header validity is challenging because the set of valid headers can be modified at runtime, making it a dynamic program property. Our type system achieves static safety by using a form of path-sensitive reasoning that tracks dynamic information from conditional statements, routing tables, and the control plane. Our evaluation shows that SafeP4’s type system can effectively eliminate common failures in many real-world programs.
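In miniature, the path-sensitive idea looks like this (a sketch under our own simplified assumptions, not SafeP4's actual type system): track the set of headers guaranteed valid along each path, refine it under validity tests, and intersect at join points.

def analyze(prog, valid=frozenset()):
    """Toy path-sensitive validity analysis: `valid` is the set of headers
    guaranteed valid on every execution path reaching the current point."""
    for instr in prog:
        op = instr[0]
        if op == "extract":                  # parsing a header makes it valid
            valid = valid | {instr[1]}
        elif op == "read":                   # using a field requires validity
            if instr[1] not in valid:
                raise TypeError(f"header {instr[1]!r} may be invalid here")
        elif op == "if_valid":               # branch on a validity test
            _, h, then_p, else_p = instr
            v_then = analyze(then_p, valid | {h})  # the test refines the set
            v_else = analyze(else_p, valid)
            valid = v_then & v_else          # keep only what both paths ensure
    return valid

prog = [
    ("extract", "eth"),
    ("if_valid", "ipv4", [("read", "ipv4")], [("extract", "ipv4")]),
    ("read", "ipv4"),                        # fine: valid on both branches
    ("read", "tcp"),                         # rejected: never extracted
]
try:
    analyze(prog)
except TypeError as e:
    print(e)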

Cite as

Matthias Eichholz, Eric Campbell, Nate Foster, Guido Salvaneschi, and Mira Mezini. How to Avoid Making a Billion-Dollar Mistake: Type-Safe Data Plane Programming with SafeP4. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 12:1-12:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{eichholz_et_al:LIPIcs.ECOOP.2019.12,
  author =	{Eichholz, Matthias and Campbell, Eric and Foster, Nate and Salvaneschi, Guido and Mezini, Mira},
  title =	{{How to Avoid Making a Billion-Dollar Mistake: Type-Safe Data Plane Programming with SafeP4}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{12:1--12:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.12},
  URN =		{urn:nbn:de:0030-drops-108041},
  doi =		{10.4230/LIPIcs.ECOOP.2019.12},
  annote =	{Keywords: P4, data plane programming, type systems}
}
Document
Transient Typechecks Are (Almost) Free

Authors: Richard Roberts, Stefan Marr, Michael Homer, and James Noble

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Transient gradual typing imposes run-time type tests that typically cause a linear slowdown. This performance impact discourages the use of type annotations because adding types to a program makes the program slower. A virtual machine can employ standard just-in-time optimizations to reduce the overhead of transient checks to near zero. These optimizations can give gradually-typed languages performance comparable to state-of-the-art dynamic languages, so programmers can add types to their code without affecting their programs' performance.
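For intuition, a transient check is a shallow run-time test at an annotated boundary. The decorator below is our own Python sketch of the idea (not the paper's Grace/Moth implementation); it is exactly this kind of pervasive, top-level-only test that the JIT optimizations reduce to near zero.

import functools

def transient(**arg_types):
    """Insert shallow, top-level-only run-time type tests at a call boundary."""
    def deco(fn):
        names = fn.__code__.co_varnames[:fn.__code__.co_argcount]
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            bound = dict(zip(names, args), **kwargs)
            for name, t in arg_types.items():
                if name in bound and not isinstance(bound[name], t):
                    raise TypeError(f"{name}: expected {t.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return deco

@transient(xs=list, scale=int)
def total(xs, scale):
    return sum(xs) * scale

print(total([1, 2, 3], 2))   # 12; every call pays the two isinstance tests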

Cite as

Richard Roberts, Stefan Marr, Michael Homer, and James Noble. Transient Typechecks Are (Almost) Free. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 5:1-5:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{roberts_et_al:LIPIcs.ECOOP.2019.5,
  author =	{Roberts, Richard and Marr, Stefan and Homer, Michael and Noble, James},
  title =	{{Transient Typechecks Are (Almost) Free}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{5:1--5:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.5},
  URN =		{urn:nbn:de:0030-drops-107974},
  doi =		{10.4230/LIPIcs.ECOOP.2019.5},
  annote =	{Keywords: dynamic type checking, gradual types, optional types, Grace, Moth, object-oriented programming}
}
Document
Quantum Distinguishing Complexity, Zero-Error Algorithms, and Statistical Zero Knowledge

Authors: Shalev Ben-David and Robin Kothari

Published in: LIPIcs, Volume 135, 14th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2019)


Abstract
We define a new query measure we call quantum distinguishing complexity, denoted QD(f) for a Boolean function f. Unlike a quantum query algorithm, which must output a state close to |0⟩ on a 0-input and a state close to |1⟩ on a 1-input, a "quantum distinguishing algorithm" can output any state, as long as the output states for any 0-input and 1-input are distinguishable. Using this measure, we establish a new relationship in query complexity: for all total functions f, Q₀(f) = Õ(Q(f)⁵), where Q₀(f) and Q(f) denote the zero-error and bounded-error quantum query complexity of f respectively, improving on the previously known sixth-power relationship. We also define a query measure based on quantum statistical zero-knowledge proofs, QSZK(f), which is at most Q(f). We show that QD(f) in fact lower bounds QSZK(f) and not just Q(f). QD(f) also upper bounds the (positive-weights) adversary bound, which yields the following relationships for all f: Q(f) ≥ QSZK(f) ≥ QD(f) = Ω(Adv(f)). This sheds some light on why the adversary bound proves suboptimal bounds for problems like Collision and Set Equality, which have low QSZK complexity. Lastly, we show implications for lifting theorems in communication complexity: a general lifting theorem for either zero-error quantum query complexity or for QSZK would imply a general lifting theorem for bounded-error quantum query complexity.

Cite as

Shalev Ben-David and Robin Kothari. Quantum Distinguishing Complexity, Zero-Error Algorithms, and Statistical Zero Knowledge. In 14th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 135, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bendavid_et_al:LIPIcs.TQC.2019.2,
  author =	{Ben-David, Shalev and Kothari, Robin},
  title =	{{Quantum Distinguishing Complexity, Zero-Error Algorithms, and Statistical Zero Knowledge}},
  booktitle =	{14th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2019)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-112-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{135},
  editor =	{van Dam, Wim and Man\v{c}inska, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2019.2},
  URN =		{urn:nbn:de:0030-drops-103944},
  doi =		{10.4230/LIPIcs.TQC.2019.2},
  annote =	{Keywords: Quantum query complexity, quantum algorithms}
}
Document
Multimedia Exposition
Geometric Realizations of the 3D Associahedron (Multimedia Exposition)

Authors: Satyan L. Devadoss, Daniel D. Johnson, Justin Lee, and Jackson Warley

Published in: LIPIcs, Volume 99, 34th International Symposium on Computational Geometry (SoCG 2018)


Abstract
The associahedron is a convex polytope whose 1-skeleton is isomorphic to the flip graph of a convex polygon. There exists an elegant geometric realization of the associahedron, using the remarkable theory of secondary polytopes, based on the geometry of the underlying polygon. We present an interactive application that visualizes this correspondence in the 3D case.
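A brief computational companion (our own sketch, independent of the application itself): enumerate the triangulations of a convex hexagon and connect two whenever they differ by a single diagonal flip; the result is the 1-skeleton of the 3D associahedron, with Catalan(4) = 14 vertices and 21 edges.

def triangulations(lo, hi):
    """All triangulations of the convex polygon on vertices lo..hi,
    as sets of chords (standard Catalan recursion on the edge (lo, hi))."""
    if hi - lo < 2:
        return [frozenset()]
    out = []
    for k in range(lo + 1, hi):               # apex of the triangle on (lo, hi)
        for left in triangulations(lo, k):
            for right in triangulations(k, hi):
                out.append(left | right | {(lo, k), (k, hi)})
    return out

n = 6                                          # hexagon -> the 3D associahedron
def diagonals(t):                              # drop boundary edges, keep chords
    return frozenset((u, v) for u, v in t if v - u > 1 and (u, v) != (0, n - 1))

verts = list({diagonals(t) for t in triangulations(0, n - 1)})
edges = [(i, j) for i, t in enumerate(verts) for j, s in enumerate(verts)
         if i < j and len(t ^ s) == 2]         # one flipped diagonal apart
print(len(verts), "triangulations,", len(edges), "flips")   # 14 and 21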

Cite as

Satyan L. Devadoss, Daniel D. Johnson, Justin Lee, and Jackson Warley. Geometric Realizations of the 3D Associahedron (Multimedia Exposition). In 34th International Symposium on Computational Geometry (SoCG 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 99, pp. 75:1-75:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{devadoss_et_al:LIPIcs.SoCG.2018.75,
  author =	{Devadoss, Satyan L. and Johnson, Daniel D. and Lee, Justin and Warley, Jackson},
  title =	{{Geometric Realizations of the 3D Associahedron}},
  booktitle =	{34th International Symposium on Computational Geometry (SoCG 2018)},
  pages =	{75:1--75:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-066-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{99},
  editor =	{Speckmann, Bettina and T\'{o}th, Csaba D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2018.75},
  URN =		{urn:nbn:de:0030-drops-87886},
  doi =		{10.4230/LIPIcs.SoCG.2018.75},
  annote =	{Keywords: associahedron, secondary polytope, realization}
}
Document
A Framework for Estimating Stream Expression Cardinalities

Authors: Anirban Dasgupta, Kevin J. Lang, Lee Rhodes, and Justin Thaler

Published in: LIPIcs, Volume 48, 19th International Conference on Database Theory (ICDT 2016)


Abstract
Given m distributed data streams A_1, ..., A_m, we consider the problem of estimating the number of unique identifiers in streams defined by set expressions over A_1, ..., A_m. We identify a broad class of algorithms for solving this problem, and show that the estimators output by any algorithm in this class are perfectly unbiased and satisfy strong variance bounds. Our analysis unifies and generalizes a variety of earlier results in the literature. To demonstrate its generality, we describe several novel sampling algorithms in our class and show that they achieve a new tradeoff between accuracy, space usage, update speed, and applicability.
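For orientation, the classic k-minimum-values (KMV) distinct-count sketch below illustrates the flavor of sampling algorithms in this space (our own sketch of a textbook technique, not the paper's class of estimators): keep the K smallest hashes of the identifiers seen; the K-th smallest estimates the distinct count, and sketches of different streams merge to handle the union.

import hashlib

K = 256                                        # sketch size; error ~ 1/sqrt(K)

def h(x):
    """Hash an identifier to a float in (0, 1)."""
    d = hashlib.sha256(str(x).encode()).digest()
    return (int.from_bytes(d[:8], "big") + 1) / 2 ** 64

def kmv(stream):
    """Keep the K smallest hash values of the identifiers seen."""
    return sorted({h(x) for x in stream})[:K]

def estimate(sk):
    return (K - 1) / sk[K - 1] if len(sk) >= K else len(sk)

def union(s1, s2):                             # sketches merge losslessly
    return sorted(set(s1) | set(s2))[:K]

A = kmv(range(0, 60_000))
B = kmv(range(40_000, 100_000))                # overlaps A on 20k identifiers
print(round(estimate(A)))                      # close to 60000
print(round(estimate(union(A, B))))            # close to 100000 = |A ∪ B|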

Cite as

Anirban Dasgupta, Kevin J. Lang, Lee Rhodes, and Justin Thaler. A Framework for Estimating Stream Expression Cardinalities. In 19th International Conference on Database Theory (ICDT 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 48, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{dasgupta_et_al:LIPIcs.ICDT.2016.6,
  author =	{Dasgupta, Anirban and Lang, Kevin J. and Rhodes, Lee and Thaler, Justin},
  title =	{{A Framework for Estimating Stream Expression Cardinalities}},
  booktitle =	{19th International Conference on Database Theory (ICDT 2016)},
  pages =	{6:1--6:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-002-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{48},
  editor =	{Martens, Wim and Zeume, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2016.6},
  URN =		{urn:nbn:de:0030-drops-57754},
  doi =		{10.4230/LIPIcs.ICDT.2016.6},
  annote =	{Keywords: sketching, data stream algorithms, mergeability, distinct elements, set operations}
}
Document
Restricted Isometry Property for General p-Norms

Authors: Zeyuan Allen-Zhu, Rati Gelashvili, and Ilya Razenshteyn

Published in: LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)


Abstract
The Restricted Isometry Property (RIP) is a fundamental property of a matrix which enables sparse recovery. Informally, an m × n matrix satisfies RIP of order k for the L_p norm if ‖Ax‖_p is approximately ‖x‖_p for every x with at most k non-zero coordinates. For every 1 ≤ p < ∞ we obtain almost tight bounds on the minimum number of rows m necessary for the RIP property to hold. Prior to this work, only the cases p = 1, p = 1 + 1/log(k), and p = 2 were studied. Interestingly, our results show that the case p = 2 is a "singularity" point: the optimal number of rows m is Θ(k^p) for all p ∈ [1, ∞) ∖ {2}, as opposed to Θ(k) for p = 2. We also obtain almost tight bounds for the column sparsity of RIP matrices and discuss implications of our results for the Stable Sparse Recovery problem.
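A quick empirical companion for the p = 2 case (our own sketch using the standard Gaussian construction, not this paper's bounds): with m = O(k log n) rows, a scaled Gaussian matrix keeps ‖Ax‖₂ close to ‖x‖₂ on random k-sparse vectors.

import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 10
m = 12 * k * int(np.log(n))                   # O(k log n) rows for p = 2
A = rng.normal(size=(m, n)) / np.sqrt(m)      # scaled Gaussian matrix

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.normal(size=k)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
# Note: sampling x only spot-checks the bound; RIP itself quantifies over
# all k-sparse x simultaneously.
print(f"{min(ratios):.3f} {max(ratios):.3f}")  # both near 1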

Cite as

Zeyuan Allen-Zhu, Rati Gelashvili, and Ilya Razenshteyn. Restricted Isometry Property for General p-Norms. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 451-460, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{allenzhu_et_al:LIPIcs.SOCG.2015.451,
  author =	{Allen-Zhu, Zeyuan and Gelashvili, Rati and Razenshteyn, Ilya},
  title =	{{Restricted Isometry Property for General p-Norms}},
  booktitle =	{31st International Symposium on Computational Geometry (SoCG 2015)},
  pages =	{451--460},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-83-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{34},
  editor =	{Arge, Lars and Pach, J\'{a}nos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.451},
  URN =		{urn:nbn:de:0030-drops-51273},
  doi =		{10.4230/LIPIcs.SOCG.2015.451},
  annote =	{Keywords: compressive sensing, dimension reduction, linear algebra, high-dimensional geometry}
}
  • Refine by Author
  • 2 Holmgren, Justin
  • 1 Allen-Zhu, Zeyuan
  • 1 Alvaro, Peter
  • 1 Amin, Nada
  • 1 Ben-David, Shalev

  • Refine by Classification
  • 3 Theory of computation → Quantum complexity theory
  • 2 Theory of computation → Computational complexity and cryptography
  • 1 Applied computing → Computational genomics
  • 1 Computing methodologies → Combinatorial algorithms
  • 1 Mathematics of computing → Functional analysis

  • Refine by Keyword
  • 2 Quantum query complexity
  • 1 All Probabilities
  • 1 Blame tracking
  • 1 Bounded Degree
  • 1 Coding Theory

  • Refine by Type
  • 19 document

  • Refine by Publication Year
  • 6 2019
  • 3 2015
  • 3 2021
  • 2 2014
  • 2 2022
