LIPIcs, Volume 185

12th Innovations in Theoretical Computer Science Conference (ITCS 2021)



Event

ITCS 2021, January 6-8, 2021, Virtual Conference

Editor

James R. Lee
  • University of Washington, Seattle, USA


Documents

Document
Complete Volume
LIPIcs, Volume 185, ITCS 2021, Complete Volume

Authors: James R. Lee


Abstract
LIPIcs, Volume 185, ITCS 2021, Complete Volume

Cite as

12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 1-1550, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{lee:LIPIcs.ITCS.2021,
  title =	{{LIPIcs, Volume 185, ITCS 2021, Complete Volume}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{1--1550},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021},
  URN =		{urn:nbn:de:0030-drops-135381},
  doi =		{10.4230/LIPIcs.ITCS.2021},
  annote =	{Keywords: LIPIcs, Volume 185, ITCS 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: James R. Lee


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{lee:LIPIcs.ITCS.2021.0,
  author =	{Lee, James R.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.0},
  URN =		{urn:nbn:de:0030-drops-135397},
  doi =		{10.4230/LIPIcs.ITCS.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
The Entropy of Lies: Playing Twenty Questions with a Liar

Authors: Yuval Dagan, Yuval Filmus, Daniel Kane, and Shay Moran


Abstract
"Twenty questions" is a guessing game played by two players: Bob thinks of an integer between 1 and n, and Alice’s goal is to recover it using a minimal number of Yes/No questions. Shannon’s entropy has a natural interpretation in this context. It characterizes the average number of questions used by an optimal strategy in the distributional variant of the game: let μ be a distribution over [n], then the average number of questions used by an optimal strategy that recovers x∼ μ is between H(μ) and H(μ)+1. We consider an extension of this game where at most k questions can be answered falsely. We extend the classical result by showing that an optimal strategy uses roughly H(μ) + k H_2(μ) questions, where H_2(μ) = ∑_x μ(x)log log 1/μ(x). This also generalizes a result by Rivest et al. (1980) for the uniform distribution. Moreover, we design near optimal strategies that only use comparison queries of the form "x ≤ c?" for c ∈ [n]. The usage of comparison queries lends itself naturally to the context of sorting, where we derive sorting algorithms in the presence of adversarial noise.

Cite as

Yuval Dagan, Yuval Filmus, Daniel Kane, and Shay Moran. The Entropy of Lies: Playing Twenty Questions with a Liar. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 1:1-1:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dagan_et_al:LIPIcs.ITCS.2021.1,
  author =	{Dagan, Yuval and Filmus, Yuval and Kane, Daniel and Moran, Shay},
  title =	{{The Entropy of Lies: Playing Twenty Questions with a Liar}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{1:1--1:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.1},
  URN =		{urn:nbn:de:0030-drops-135400},
  doi =		{10.4230/LIPIcs.ITCS.2021.1},
  annote =	{Keywords: entropy, twenty questions, algorithms, sorting}
}
Document
Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)

Authors: Russell Impagliazzo and Sam McGuire


Abstract
Computational pseudorandomness studies the extent to which a random variable Z looks like the uniform distribution according to a class of tests ℱ. Computational entropy generalizes computational pseudorandomness by studying the extent to which a random variable looks like a high-entropy distribution. There are different formal definitions of computational entropy with different advantages for different applications. Because of this, it is of interest to understand when these definitions are equivalent. We consider three notions of computational entropy which are known to be equivalent when the test class ℱ is closed under taking majorities. This equivalence constitutes (essentially) the so-called dense model theorem of Green and Tao (later made explicit by Tao-Ziegler, Reingold et al., and Gowers). The dense model theorem plays a key role in Green and Tao’s proof that the primes contain arbitrarily long arithmetic progressions and has since been connected to a surprisingly wide range of topics in mathematics and computer science, including cryptography, computational complexity, combinatorics and machine learning. We show that, in different situations where ℱ is not closed under majority, this equivalence fails. This in turn provides examples where the dense model theorem is false.

Cite as

Russell Impagliazzo and Sam McGuire. Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 2:1-2:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{impagliazzo_et_al:LIPIcs.ITCS.2021.2,
  author =	{Impagliazzo, Russell and McGuire, Sam},
  title =	{{Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{2:1--2:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.2},
  URN =		{urn:nbn:de:0030-drops-135417},
  doi =		{10.4230/LIPIcs.ITCS.2021.2},
  annote =	{Keywords: Computational entropy, dense model theorem, coin problem}
}
Document
Algorithmic Persuasion with Evidence

Authors: Martin Hoefer, Pasin Manurangsi, and Alexandros Psomas


Abstract
We consider a game of persuasion with evidence between a sender and a receiver. The sender has private information. By presenting evidence on the information, the sender wishes to persuade the receiver to take a single action (e.g., hire a job candidate, or convict a defendant). The sender’s utility depends solely on whether or not the receiver takes the action. The receiver’s utility depends on both the action and the sender’s private information. We study three natural variations. First, we consider sequential equilibria of the game without commitment power. Second, we consider a persuasion variant, where the sender commits to a signaling scheme and then the receiver, after seeing the evidence, decides whether to take the action. Third, we study a delegation variant, where the receiver first commits to taking the action if presented with certain evidence, and then the sender presents evidence to maximize the probability that the action is taken. We study these variants through the computational lens, and give hardness results, optimal approximation algorithms, and polynomial-time algorithms for special cases. Among our results is an approximation algorithm that rounds a semidefinite program and might be of independent interest, since, to the best of our knowledge, it is the first such approximation algorithm for a natural problem in algorithmic economics.

Cite as

Martin Hoefer, Pasin Manurangsi, and Alexandros Psomas. Algorithmic Persuasion with Evidence. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hoefer_et_al:LIPIcs.ITCS.2021.3,
  author =	{Hoefer, Martin and Manurangsi, Pasin and Psomas, Alexandros},
  title =	{{Algorithmic Persuasion with Evidence}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.3},
  URN =		{urn:nbn:de:0030-drops-135420},
  doi =		{10.4230/LIPIcs.ITCS.2021.3},
  annote =	{Keywords: Bayesian Persuasion, Semidefinite Programming, Approximation Algorithms}
}
Document
The Complexity of Finding Fair Independent Sets in Cycles

Authors: Ishay Haviv


Abstract
Let G be a cycle graph and let V₁,…,V_m be a partition of its vertex set into m sets. An independent set S of G is said to fairly represent the partition if |S ∩ V_i| ≥ 1/2⋅|V_i| - 1 for all i ∈ [m]. It is known that for every cycle and every partition of its vertex set, there exists an independent set that fairly represents the partition (Aharoni et al., A Journey through Discrete Math., 2017). We prove that the problem of finding such an independent set is PPA-complete. As an application, we show that the problem of finding a monochromatic edge in a Schrijver graph, given a succinct representation of a coloring that uses fewer colors than its chromatic number, is PPA-complete as well. The work is motivated by the computational aspects of the "cycle plus triangles" problem and of its extensions.
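
A small self-contained checker for the two conditions in the abstract (vertex and part encodings are ours, for illustration only):

from collections import Counter

def fairly_represents(n, part, S):
    # Checks, on the cycle C_n with vertices 0..n-1, that S is an
    # independent set and that |S ∩ V_i| >= |V_i|/2 - 1 for every part.
    # part[v] is the index of the part V_i containing vertex v.
    S = set(S)
    if any(v in S and (v + 1) % n in S for v in range(n)):
        return False  # two cyclically adjacent vertices chosen
    sizes = Counter(part)
    hits = Counter(part[v] for v in S)
    return all(hits[i] >= sizes[i] / 2 - 1 for i in sizes)

# A 6-cycle partitioned into two parts of size 3:
print(fairly_represents(6, [0, 0, 0, 1, 1, 1], [0, 2, 4]))  # True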

Cite as

Ishay Haviv. The Complexity of Finding Fair Independent Sets in Cycles. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{haviv:LIPIcs.ITCS.2021.4,
  author =	{Haviv, Ishay},
  title =	{{The Complexity of Finding Fair Independent Sets in Cycles}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.4},
  URN =		{urn:nbn:de:0030-drops-135431},
  doi =		{10.4230/LIPIcs.ITCS.2021.4},
  annote =	{Keywords: Fair independent sets in cycles, the complexity class \{PPA\}, Schrijver graphs}
}
Document
Sharp Threshold Rates for Random Codes

Authors: Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters


Abstract
Suppose that 𝒫 is a property that may be satisfied by a random code C ⊂ Σⁿ. For example, for some p ∈ (0,1), 𝒫 might be the property that there exist three elements of C that lie in some Hamming ball of radius pn. We say that R^* is the threshold rate for 𝒫 if a random code of rate R^* + ε is very likely to satisfy 𝒫, while a random code of rate R^* - ε is very unlikely to satisfy 𝒫. While random codes are well-studied in coding theory, even the threshold rates for relatively simple properties like the one above are not well understood. We characterize threshold rates for a rich class of properties. These properties, like the example above, are defined by the inclusion of specific sets of codewords which are also suitably "symmetric." For properties in this class, we show that the threshold rate is in fact equal to the lower bound that a simple first-moment calculation obtains. Our techniques not only pin down the threshold rate for the property 𝒫 above; they also give sharp bounds on the threshold rate for list-recovery in several parameter regimes, as well as an efficient algorithm for estimating the threshold rates for list-recovery in general.

Cite as

Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Sharp Threshold Rates for Random Codes. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.ITCS.2021.5,
  author =	{Guruswami, Venkatesan and Mosheiff, Jonathan and Resch, Nicolas and Silas, Shashwat and Wootters, Mary},
  title =	{{Sharp Threshold Rates for Random Codes}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.5},
  URN =		{urn:nbn:de:0030-drops-135446},
  doi =		{10.4230/LIPIcs.ITCS.2021.5},
  annote =	{Keywords: Coding theory, Random codes, Sharp thresholds}
}
Document
Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation

Authors: Cameron Musco, Christopher Musco, and David P. Woodruff


Abstract
In the masked low-rank approximation problem, one is given a data matrix A ∈ ℝ^{n × n} and a binary mask matrix W ∈ {0,1}^{n × n}. The goal is to find a rank-k matrix L for which cost(L) := ∑_{i=1}^n ∑_{j=1}^n W_{i,j} ⋅ (A_{i,j} - L_{i,j})² ≤ OPT + ε ‖A‖_F², where OPT = min_{rank-k L̂} cost(L̂) and ε is a given error parameter. Depending on the choice of W, the above problem captures factor analysis, low-rank plus diagonal decomposition, robust PCA, low-rank matrix completion, low-rank plus block matrix approximation, low-rank recovery from monotone missing data, and a number of other important problems. Many of these problems are NP-hard, and while algorithms with provable guarantees are known in some cases, they either 1) run in time n^Ω(k²/ε) or 2) make strong assumptions, for example, that A is incoherent or that the entries in W are chosen independently and uniformly at random. In this work, we show that a common polynomial-time heuristic, which simply sets A to 0 where W is 0 and then finds a standard low-rank approximation, yields bicriteria approximation guarantees for this problem. In particular, for rank k' > k depending on the public coin partition number of W, the heuristic outputs a rank-k' matrix L with cost(L) ≤ OPT + ε ‖A‖_F². This partition number is in turn bounded by the randomized communication complexity of W, when interpreted as a two-player communication matrix. For many important cases, including all those listed above, this yields bicriteria approximation guarantees with rank k' = k ⋅ poly(log n/ε). Beyond this result, we show that different notions of communication complexity yield bicriteria algorithms for natural variants of masked low-rank approximation. For example, multi-player number-in-hand communication complexity connects to masked tensor decomposition, and non-deterministic communication complexity to masked Boolean low-rank factorization.
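
The heuristic analyzed in the paper is simple enough to state in a few lines of NumPy. The following sketch is ours (note the paper's guarantee requires running it with a larger rank k' than the target k): it zeroes out A where W is 0, truncates the SVD, and evaluates the masked cost.

import numpy as np

def masked_lowrank_heuristic(A, W, k):
    # Zero out A where the mask is 0, then take a standard rank-k
    # truncated SVD of the result.
    U, s, Vt = np.linalg.svd(A * W, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def masked_cost(A, W, L):
    # cost(L) = sum_ij W_ij * (A_ij - L_ij)^2, as defined above.
    return float(np.sum(W * (A - L) ** 2))

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
W = (rng.random((50, 50)) > 0.1).astype(float)  # an arbitrary mask
print(masked_cost(A, W, masked_lowrank_heuristic(A, W, k=5)))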

Cite as

Cameron Musco, Christopher Musco, and David P. Woodruff. Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{musco_et_al:LIPIcs.ITCS.2021.6,
  author =	{Musco, Cameron and Musco, Christopher and Woodruff, David P.},
  title =	{{Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.6},
  URN =		{urn:nbn:de:0030-drops-135452},
  doi =		{10.4230/LIPIcs.ITCS.2021.6},
  annote =	{Keywords: low-rank approximation, communication complexity, weighted low-rank approximation, bicriteria approximation algorithms}
}
Document
Pseudorandom Generators for Unbounded-Width Permutation Branching Programs

Authors: William M. Hoza, Edward Pyne, and Salil Vadhan


Abstract
We prove that the Impagliazzo-Nisan-Wigderson [Impagliazzo et al., 1994] pseudorandom generator (PRG) fools ordered (read-once) permutation branching programs of unbounded width with a seed length of Õ(log d + log n ⋅ log(1/ε)), assuming the program has only one accepting vertex in the final layer. Here, n is the length of the program, d is the degree (equivalently, the alphabet size), and ε is the error of the PRG. In contrast, we show that a randomly chosen generator requires seed length Ω(n log d) to fool such unbounded-width programs. Thus, this is an unusual case where an explicit construction is "better than random." Except when the program’s width w is very small, this is an improvement over prior work. For example, when w = poly(n) and d = 2, the best prior PRG for permutation branching programs was simply Nisan’s PRG [Nisan, 1992], which fools general ordered branching programs with seed length O(log(wn/ε) log n). We prove a seed length lower bound of Ω̃(log d + log n ⋅ log(1/ε)) for fooling these unbounded-width programs, showing that our seed length is near-optimal. In fact, when ε ≤ 1/log n, our seed length is within a constant factor of optimal. Our analysis of the INW generator uses the connection between the PRG and the derandomized square of Rozenman and Vadhan [Rozenman and Vadhan, 2005] and the recent analysis of the latter in terms of unit-circle approximation by Ahmadinejad et al. [Ahmadinejad et al., 2020].

Cite as

William M. Hoza, Edward Pyne, and Salil Vadhan. Pseudorandom Generators for Unbounded-Width Permutation Branching Programs. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hoza_et_al:LIPIcs.ITCS.2021.7,
  author =	{Hoza, William M. and Pyne, Edward and Vadhan, Salil},
  title =	{{Pseudorandom Generators for Unbounded-Width Permutation Branching Programs}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.7},
  URN =		{urn:nbn:de:0030-drops-135464},
  doi =		{10.4230/LIPIcs.ITCS.2021.7},
  annote =	{Keywords: Pseudorandom generators, permutation branching programs}
}
Document
Pipeline Interventions

Authors: Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, and Juba Ziani


Abstract
We introduce the pipeline intervention problem, defined by a layered directed acyclic graph and a set of stochastic matrices governing transitions between successive layers. The graph is a stylized model for how people from different populations are presented opportunities, eventually leading to some reward. In our model, individuals are born into an initial position (i.e. some node in the first layer of the graph) according to a fixed probability distribution, and then stochastically progress through the graph according to the transition matrices, until they reach a node in the final layer of the graph; each node in the final layer has a reward associated with it. The pipeline intervention problem asks how to best make costly changes to the transition matrices governing people’s stochastic transitions through the graph, subject to a budget constraint. We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the least expected value. We consider two variants of the maximin objective that turn out to be distinct, depending on whether we demand a deterministic solution or allow randomization. For each objective, we give an efficient approximation algorithm (an additive FPTAS) for constant-width networks. We also tightly characterize the "price of fairness" in our setting: the ratio between the highest achievable social welfare and the social welfare consistent with a maximin optimal solution. Finally, we show that for polynomial-width networks, approximating the maximin objective to within any constant factor is NP-hard, even for networks of constant depth. This shows that the restriction on the width in our positive results is essential.
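
With no interventions at all, the expected value of each starting node in this model is a backward induction over the layers; the sketch below (ours, with a made-up two-layer pipeline) shows the quantity being optimized before any budget is spent.

import numpy as np

def expected_rewards(transitions, rewards):
    # transitions: one row-stochastic matrix per pair of successive
    # layers; rewards: the reward attached to each final-layer node.
    # Returns the expected reward from each node of the first layer.
    v = np.asarray(rewards, dtype=float)
    for T in reversed(transitions):
        v = np.asarray(T) @ v
    return v

T = [[0.7, 0.3], [0.2, 0.8]]  # hypothetical single transition matrix
print(expected_rewards([T], rewards=[1.0, 0.0]))  # [0.7, 0.2]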

Cite as

Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, and Juba Ziani. Pipeline Interventions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arunachaleswaran_et_al:LIPIcs.ITCS.2021.8,
  author =	{Arunachaleswaran, Eshwar Ram and Kannan, Sampath and Roth, Aaron and Ziani, Juba},
  title =	{{Pipeline Interventions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.8},
  URN =		{urn:nbn:de:0030-drops-135478},
  doi =		{10.4230/LIPIcs.ITCS.2021.8},
  annote =	{Keywords: Interventions for fairness, fairness in navigating life paths, social welfare, maximin welfare, budget-constrained optimization, hardness of approximation}
}
Document
A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits

Authors: Mrinal Kumar and Ben Lee Volk


Abstract
We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of non-rigid matrices: that is, we show that for every large enough field 𝔽, there is a non-zero n²-variate polynomial P ∈ 𝔽[x_{1, 1}, …, x_{n, n}] of degree at most poly(n) such that every matrix M which can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n²/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer and Landsberg [Fulvio Gesmundo et al., 2016] and improves the best known upper bound for this problem from exp(n²) [Abhinav Kumar et al., 2014; Fulvio Gesmundo et al., 2016] down to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n²/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [Amir Shpilka and Ilya Volkovich, 2015] to construct low-degree "universal" maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension-counting argument showing that any such polynomial map has a low-degree annihilating polynomial completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem would imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [Valentine Kabanets and Russell Impagliazzo, 2004].

Cite as

Mrinal Kumar and Ben Lee Volk. A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 9:1-9:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.ITCS.2021.9,
  author =	{Kumar, Mrinal and Volk, Ben Lee},
  title =	{{A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{9:1--9:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.9},
  URN =		{urn:nbn:de:0030-drops-135486},
  doi =		{10.4230/LIPIcs.ITCS.2021.9},
  annote =	{Keywords: Rigid Matrices, Linear Circuits, Degree Bounds, Circuit Lower Bounds}
}
Document
The Strongish Planted Clique Hypothesis and Its Consequences

Authors: Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm


Abstract
We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^Ω(log n) (so that the state-of-the-art running time of n^O(log n) is optimal up to a constant in the exponent). We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial time algorithm achieves o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^o(1) factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k. Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.

Cite as

Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm. The Strongish Planted Clique Hypothesis and Its Consequences. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{manurangsi_et_al:LIPIcs.ITCS.2021.10,
  author =	{Manurangsi, Pasin and Rubinstein, Aviad and Schramm, Tselil},
  title =	{{The Strongish Planted Clique Hypothesis and Its Consequences}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.10},
  URN =		{urn:nbn:de:0030-drops-135491},
  doi =		{10.4230/LIPIcs.ITCS.2021.10},
  annote =	{Keywords: Planted Clique, Densest k-Subgraph, Hardness of Approximation}
}
Document
Sample Efficient Identity Testing and Independence Testing of Quantum States

Authors: Nengkun Yu


Abstract
In this paper, we study the quantum identity testing problem, i.e., testing whether two given quantum states are identical, and the quantum independence testing problem, i.e., testing whether a given multipartite quantum state is in tensor product form. For the quantum identity testing problem on a 𝒟(ℂ^d) system, we provide a deterministic measurement scheme that uses 𝒪(d²/ε²) copies via independent measurements, with d being the dimension of the state and ε being the additive error. For the independence testing problem on a 𝒟(ℂ^{d₁}⊗ℂ^{d₂}⊗⋯⊗ℂ^{d_m}) system, we show that the sample complexity is Θ̃((Π_{i = 1}^m d_i)/ε²) via collective measurements, and 𝒪((Π_{i = 1}^m d_i²)/ε²) via independent measurements. If randomized choices of independent measurements are allowed, the sample complexity is Θ(d^{3/2}/ε²) for the quantum identity testing problem, and Θ̃((Π_{i = 1}^m d_i^{3/2})/ε²) for the quantum independence testing problem.

Cite as

Nengkun Yu. Sample Efficient Identity Testing and Independence Testing of Quantum States. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{yu:LIPIcs.ITCS.2021.11,
  author =	{Yu, Nengkun},
  title =	{{Sample Efficient Identity Testing and Independence Testing of Quantum States}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.11},
  URN =		{urn:nbn:de:0030-drops-135504},
  doi =		{10.4230/LIPIcs.ITCS.2021.11},
  annote =	{Keywords: Quantum property testing}
}
Document
Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution

Authors: Olaf Beyersdorff and Benjamin Böhm


Abstract
QBF solvers implementing the QCDCL paradigm are powerful algorithms that successfully tackle many computationally complex applications. However, our theoretical understanding of the strength and limitations of these QCDCL solvers is very limited. In this paper we propose to formally model QCDCL solvers as proof systems. We define different policies that can be used for decision heuristics and unit propagation and that give rise to a number of sound and complete QBF proof systems (and hence new QCDCL algorithms). With respect to the standard policies used in practical QCDCL solving, we show that the corresponding QCDCL proof system is incomparable (via exponential separations) to Q-resolution, the classical QBF resolution system used in the literature. This is in stark contrast to the propositional setting, where CDCL and resolution are known to be p-equivalent. This raises the question of which formulas are hard for standard QCDCL, since Q-resolution lower bounds do not necessarily apply to QCDCL, as we show here. In answer to this question we prove several lower bounds for QCDCL, including exponential lower bounds for a large class of random QBFs. We also introduce a strengthening of the decision heuristic used in classical QCDCL, which does not necessarily decide variables in order of the prefix, but still allows the learning of asserting clauses. We show that with this decision policy, QCDCL can be exponentially faster on some formulas. We further exhibit a QCDCL proof system that is p-equivalent to Q-resolution. In comparison to classical QCDCL, this new QCDCL version adapts both the decision and unit propagation policies.

Cite as

Olaf Beyersdorff and Benjamin Böhm. Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{beyersdorff_et_al:LIPIcs.ITCS.2021.12,
  author =	{Beyersdorff, Olaf and B\"{o}hm, Benjamin},
  title =	{{Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.12},
  URN =		{urn:nbn:de:0030-drops-135519},
  doi =		{10.4230/LIPIcs.ITCS.2021.12},
  annote =	{Keywords: CDCL, QBF, QCDCL, proof complexity, resolution, Q-resolution}
}
Document
The Quantum Supremacy Tsirelson Inequality

Authors: William Kretschmer


Abstract
A leading proposal for verifying near-term quantum supremacy experiments on noisy random quantum circuits is linear cross-entropy benchmarking. For a quantum circuit C on n qubits and a sample z ∈ {0,1}ⁿ, the benchmark involves computing |⟨z|C|0ⁿ⟩|², i.e., the probability of measuring z from the output distribution of C on the all-zeros input. Under a strong conjecture about the classical hardness of estimating output probabilities of quantum circuits, no polynomial-time classical algorithm given C can output a string z such that |⟨z|C|0ⁿ⟩|² is substantially larger than 1/2ⁿ (Aaronson and Gunn, 2019). On the other hand, for a random quantum circuit C, sampling z from the output distribution of C achieves |⟨z|C|0ⁿ⟩|² ≈ 2/2ⁿ on average (Arute et al., 2019). In analogy with the Tsirelson inequality from quantum nonlocal correlations, we ask: can a polynomial-time quantum algorithm do substantially better than 2/2ⁿ? We study this question in the query (or black box) model, where the quantum algorithm is given oracle access to C. We show that, for any ε ≥ 1/poly(n), outputting a sample z such that |⟨z|C|0ⁿ⟩|² ≥ (2 + ε)/2ⁿ on average requires at least Ω(2^{n/4}/poly(n)) queries to C, but not more than O(2^{n/3}) queries to C, if C is either a Haar-random n-qubit unitary, or a canonical state preparation oracle for a Haar-random n-qubit state. We also show that when C samples from the Fourier distribution of a random Boolean function, the naive algorithm that samples from C is the optimal 1-query algorithm for maximizing |⟨z|C|0ⁿ⟩|² on average.
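
The 2/2ⁿ baseline can be checked numerically. A small sketch (ours) draws a Haar-random unitary as a stand-in for the oracle C and computes the expected value of 2ⁿ·|⟨z|C|0ⁿ⟩|² when z is sampled from C's own output distribution; the value near 2 is the average mentioned in the abstract.

import numpy as np

def haar_unitary(dim, rng):
    # Standard construction: QR of a complex Gaussian matrix, with the
    # phases of R's diagonal folded back into Q's columns.
    Z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

n, rng = 6, np.random.default_rng(1)
N = 2 ** n
probs = np.abs(haar_unitary(N, rng)[:, 0]) ** 2  # |<z|C|0^n>|^2 for all z
print(N * np.sum(probs ** 2))  # E over z ~ C's output distribution; ≈ 2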

Cite as

William Kretschmer. The Quantum Supremacy Tsirelson Inequality. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kretschmer:LIPIcs.ITCS.2021.13,
  author =	{Kretschmer, William},
  title =	{{The Quantum Supremacy Tsirelson Inequality}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.13},
  URN =		{urn:nbn:de:0030-drops-135524},
  doi =		{10.4230/LIPIcs.ITCS.2021.13},
  annote =	{Keywords: quantum supremacy, quantum query complexity, random circuit sampling}
}
Document
Approximately Strategyproof Tournament Rules in the Probabilistic Setting

Authors: Kimberly Ding and S. Matthew Weinberg


Abstract
We consider the manipulability of tournament rules, which map the results of the binom(n,2) pairwise matches among n teams to a winner. Prior work designs simple tournament rules such that no pair of teams can manipulate the outcome of their match to improve their probability of winning by more than 1/3, and this is the best possible among Condorcet-consistent tournament rules (those which select an undefeated team whenever one exists) [Jon Schneider et al., 2017; Ariel Schvartzman et al., 2020]. These lower bounds require the manipulators to know precisely the outcomes of all future matches. We take a beyond-worst-case view and instead consider tournaments which are "close to uniform": the outcomes of all matches are independent, and no team is believed to win any match with probability exceeding 1/2+ε. We show that Randomized Single Elimination Bracket [Jon Schneider et al., 2017] and a new tournament rule we term Randomized Death Match have the property that no pair of teams can manipulate the outcome of their match to improve their probability of winning by more than ε/3 + 2ε²/3, for all ε, and that this is the best possible among Condorcet-consistent tournament rules. Our main technical contribution is a recursive framework for analyzing the manipulability of certain forms of tournament rules. In addition to our main results, this view helps streamline the previous analysis of Randomized Single Elimination Bracket, and may be of independent interest.
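
The Randomized Death Match rule has a one-line description; the sketch below is our reading of it (eliminate the loser of a uniformly random pairing until one team remains) and is illustrative only.

import random

def randomized_death_match(teams, beats):
    # beats[a][b] is True iff team a beats team b in their match.
    alive = list(teams)
    while len(alive) > 1:
        a, b = random.sample(alive, 2)
        alive.remove(b if beats[a][b] else a)
    return alive[0]

# A 3-cycle (no undefeated team): A beats B, B beats C, C beats A.
beats = {"A": {"B": True, "C": False},
         "B": {"A": False, "C": True},
         "C": {"A": True, "B": False}}
print(randomized_death_match(["A", "B", "C"], beats))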

Cite as

Kimberly Ding and S. Matthew Weinberg. Approximately Strategyproof Tournament Rules in the Probabilistic Setting. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ding_et_al:LIPIcs.ITCS.2021.14,
  author =	{Ding, Kimberly and Weinberg, S. Matthew},
  title =	{{Approximately Strategyproof Tournament Rules in the Probabilistic Setting}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.14},
  URN =		{urn:nbn:de:0030-drops-135532},
  doi =		{10.4230/LIPIcs.ITCS.2021.14},
  annote =	{Keywords: Tournaments, Incentive Compatibility, Recursive Analysis, Social Choice Theory}
}
Document
Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!

Authors: Anup Bhattacharya, Arijit Bishnu, Gopinath Mishra, and Anannya Upasana


Abstract
We study a graph coloring problem that is otherwise easy in the RAM model but becomes quite non-trivial in the one-pass streaming model. In contrast to previous graph coloring problems in streaming, which try to find an assignment of colors to vertices, our main work is on estimating the number of conflicting, or monochromatic, edges given a coloring function that streams along with the graph; we call this problem Conflict-Est. The coloring function on a vertex can be read or accessed only when the vertex is revealed in the stream. If we need the color of a vertex that has streamed past, then that color, along with its vertex, has to be stored explicitly. We provide algorithms for a graph that is streaming in different variants of the one-pass vertex arrival streaming model, viz. the Vertex Arrival (VA), Vertex Arrival With Degree Oracle (VAdeg), and Vertex Arrival in Random Order (VArand) models, with special focus on the random order model. We also provide matching lower bounds for most of the cases. The mainstay of our work is in showing that the properties of a random order stream can be exploited to design efficient streaming algorithms for estimating the number of monochromatic edges. We have also obtained a lower bound, though not matching the upper bound, for the random order model. Among the three models, we show a clear separation of power in favor of the VArand model for this problem.
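
For contrast with the paper's sublinear-space estimators, the trivial full-memory baseline in the Vertex Arrival model stores every color and counts conflicts exactly (the stream encoding here is our own, for illustration):

def conflict_count(stream):
    # One pass; each item is (v, color of v, neighbors of v among
    # previously arrived vertices). Stores all colors seen so far.
    color, conflicts = {}, 0
    for v, c, earlier_neighbors in stream:
        color[v] = c
        conflicts += sum(1 for u in earlier_neighbors if color[u] == c)
    return conflicts

# A triangle whose two "r"-colored endpoints share an edge:
stream = [(1, "r", []), (2, "b", [1]), (3, "r", [1, 2])]
print(conflict_count(stream))  # 1 monochromatic edge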

Cite as

Anup Bhattacharya, Arijit Bishnu, Gopinath Mishra, and Anannya Upasana. Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 15:1-15:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.ITCS.2021.15,
  author =	{Bhattacharya, Anup and Bishnu, Arijit and Mishra, Gopinath and Upasana, Anannya},
  title =	{{Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{15:1--15:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.15},
  URN =		{urn:nbn:de:0030-drops-135544},
  doi =		{10.4230/LIPIcs.ITCS.2021.15},
  annote =	{Keywords: Streaming, random ordering, graph coloring, estimation, lower bounds}
}
Document
The Variable-Processor Cup Game

Authors: William Kuszmaul and Alek Westover


Abstract
The problem of scheduling tasks on p processors so that no task ever gets too far behind is often described as a game with cups and water. In the p-processor cup game on n cups, there are two players, a filler and an emptier, that take turns adding and removing water from a set of n cups. In each turn, the filler adds p units of water to the cups, placing at most 1 unit of water in each cup, and then the emptier selects p cups from which to remove up to 1 unit of water each. The emptier’s goal is to minimize the backlog, which is the height of the fullest cup. The p-processor cup game has been studied in many different settings, dating back to the late 1960s. All of the past work shares one common assumption: that p is fixed. This paper initiates the study of what happens when the number of available processors p varies over time, resulting in what we call the variable-processor cup game. Remarkably, the optimal bounds for the variable-processor cup game differ dramatically from those for its classical counterpart. Whereas the p-processor cup game has optimal backlog Θ(log n), the variable-processor game has optimal backlog Θ(n). Moreover, there is an efficient filling strategy that yields backlog Ω(n^{1 - ε}) in quasi-polynomial time against any deterministic emptying strategy. We additionally show that straightforward uses of randomization cannot help the emptier. In particular, for any positive constant Δ, and any Δ-greedy-like randomized emptying algorithm 𝒜, there is a filling strategy that achieves backlog Ω(n^{1 - ε}) against 𝒜 in quasi-polynomial time.
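
A round of the game is easy to simulate; the following sketch (ours) plays one round against a greedy emptier, the natural strategy of emptying the p fullest cups:

def cup_game_round(cups, fill, p):
    # fill: the filler's move, at most 1 unit per cup, p units total.
    # The greedy emptier then removes up to 1 unit from each of the
    # p fullest cups. Returns the new fills; backlog is max(cups).
    assert all(0 <= f <= 1 for f in fill) and abs(sum(fill) - p) < 1e-9
    cups = [c + f for c, f in zip(cups, fill)]
    for i in sorted(range(len(cups)), key=lambda i: -cups[i])[:p]:
        cups[i] = max(0.0, cups[i] - 1.0)
    return cups

print(cup_game_round([0.0, 0.0, 0.5], [0.5, 0.5, 1.0], p=2))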

Cite as

William Kuszmaul and Alek Westover. The Variable-Processor Cup Game. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 16:1-16:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kuszmaul_et_al:LIPIcs.ITCS.2021.16,
  author =	{Kuszmaul, William and Westover, Alek},
  title =	{{The Variable-Processor Cup Game}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{16:1--16:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.16},
  URN =		{urn:nbn:de:0030-drops-135559},
  doi =		{10.4230/LIPIcs.ITCS.2021.16},
  annote =	{Keywords: scheduling, cup games, online algorithms, lower bounds}
}
Document
Comparison Graphs: A Unified Method for Uniformity Testing

Authors: Uri Meir


Abstract
Distribution testing can be described as follows: q samples are drawn from some unknown distribution P over a known domain [n]. After the sampling process, a decision must be made about whether P satisfies some property, or is far from it. The most studied problem in the field is arguably uniformity testing, where one needs to distinguish the case that P is uniform over [n] from the case that P is ε-far from being uniform (in 𝓁₁). It is known that Θ(√n/ε²) samples are necessary and sufficient for this task. This problem was recently considered in various restricted models that pose, for example, communication or memory constraints. On more than one occasion, the known optimal solution boils down to counting collisions among the drawn samples (each two samples that have the same value add one to the count). This idea dates back to the first uniformity tester, and earned it the name "collision-based tester". In this paper, we introduce the notion of comparison graphs and use it to formally define a generalized collision-based tester. Roughly speaking, the edges of the graph indicate to the tester which pairs of samples should be compared (that is, the original tester is induced by a clique, where all pairs are compared). We prove a structural theorem that gives a sufficient condition for a comparison graph to induce a good uniformity tester. As an application, we develop a generic method to test uniformity, and devise nearly-optimal uniformity testers under various computational constraints. We improve and simplify a few known results, and introduce a new constrained model in which the method also produces an efficient tester. The idea behind our method is to translate computational constraints of a certain model into constraints on the comparison graph, which paves the way to finding a good graph: a set of comparisons allowed by the model that suffices to test for uniformity. We believe that in future considerations of uniformity testing in new models, our method can be used to obtain efficient testers with minimal effort.
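
For reference, the original clique-induced tester that the paper generalizes fits in a few lines; the acceptance threshold below is a common textbook choice, not taken from this paper.

from itertools import combinations

def collision_tester(samples, n, eps):
    # Count collisions over all pairs of samples (the clique comparison
    # graph) and accept iff the collision rate is close to the uniform
    # rate 1/n; far-from-uniform P has collision rate >= (1 + eps^2)/n.
    pairs = len(samples) * (len(samples) - 1) / 2
    collisions = sum(a == b for a, b in combinations(samples, 2))
    return collisions / pairs <= (1 + eps ** 2 / 2) / n  # True = "uniform"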

Cite as

Uri Meir. Comparison Graphs: A Unified Method for Uniformity Testing. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{meir:LIPIcs.ITCS.2021.17,
  author =	{Meir, Uri},
  title =	{{Comparison Graphs: A Unified Method for Uniformity Testing}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.17},
  URN =		{urn:nbn:de:0030-drops-135560},
  doi =		{10.4230/LIPIcs.ITCS.2021.17},
  annote =	{Keywords: Distribution Testing, Uniformity Testing, Distributed Algorithms, Streaming Algorithms, Comparison Graphs}
}
Document
Circular Trace Reconstruction

Authors: Shyam Narayanan and Michael Ren


Abstract
Trace reconstruction is the problem of learning an unknown string x from independent traces of x, where traces are generated by independently deleting each bit of x with some deletion probability q. In this paper, we initiate the study of Circular trace reconstruction, where the unknown string x is circular and traces are now rotated by a random cyclic shift. Trace reconstruction is related to many computational biology problems studying DNA, which is a primary motivation for this problem as well, as many types of DNA are known to be circular. Our main results are as follows. First, we prove that we can reconstruct arbitrary circular strings of length n using exp(Õ(n^{1/3})) traces for any constant deletion probability q, as long as n is prime or the product of two primes. For n of this form, this nearly matches what was the best known bound of exp(O(n^{1/3})) for standard trace reconstruction when this paper was initially released. We note, however, that Chase very recently improved the standard trace reconstruction bound to exp(Õ(n^{1/5})). Next, we prove that we can reconstruct random circular strings with high probability using n^O(1) traces for any constant deletion probability q. Finally, we prove a lower bound of Ω̃(n³) traces for arbitrary circular strings, which is greater than the best known lower bound of Ω̃(n^{3/2}) in standard trace reconstruction.
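
Generating traces in this model is straightforward; here is a small sketch (ours) of the channel described in the abstract:

import random

def circular_trace(x, q, rng=random):
    # Rotate the circular string x by a uniformly random shift, then
    # delete each bit independently with probability q. (For circular
    # strings, rotating before or after deletion gives the same
    # distribution.)
    r = rng.randrange(len(x))
    rotated = x[r:] + x[:r]
    return "".join(b for b in rotated if rng.random() > q)

print(circular_trace("1011001110", q=0.2))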

Cite as

Shyam Narayanan and Michael Ren. Circular Trace Reconstruction. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{narayanan_et_al:LIPIcs.ITCS.2021.18,
  author =	{Narayanan, Shyam and Ren, Michael},
  title =	{{Circular Trace Reconstruction}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.18},
  URN =		{urn:nbn:de:0030-drops-135573},
  doi =		{10.4230/LIPIcs.ITCS.2021.18},
  annote =	{Keywords: Trace Reconstruction, Deletion Channel, Cyclotomic Integers}
}
Document
Self-Testing of a Single Quantum Device Under Computational Assumptions

Authors: Tony Metger and Thomas Vidick


Abstract
Self-testing is a method to characterise an arbitrary quantum system based only on its classical input-output correlations, and plays an important role in device-independent quantum information processing as well as quantum complexity theory. Prior works on self-testing require the assumption that the system’s state is shared among multiple parties that only perform local measurements and cannot communicate. Here, we replace the setting of multiple non-communicating parties, which is difficult to enforce in practice, by a single computationally bounded party. Specifically, we construct a protocol that allows a classical verifier to robustly certify that a single computationally bounded quantum device must have prepared a Bell pair and performed single-qubit measurements on it, up to a change of basis applied to both the device’s state and measurements. This means that under computational assumptions, the verifier is able to certify the presence of entanglement, a property usually closely associated with two separated subsystems, inside a single quantum device. To achieve this, we build on techniques first introduced by Brakerski et al. (2018) and Mahadev (2018) which allow a classical verifier to constrain the actions of a quantum device assuming the device does not break post-quantum cryptography.

Cite as

Tony Metger and Thomas Vidick. Self-Testing of a Single Quantum Device Under Computational Assumptions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{metger_et_al:LIPIcs.ITCS.2021.19,
  author =	{Metger, Tony and Vidick, Thomas},
  title =	{{Self-Testing of a Single Quantum Device Under Computational Assumptions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.19},
  URN =		{urn:nbn:de:0030-drops-135581},
  doi =		{10.4230/LIPIcs.ITCS.2021.19},
  annote =	{Keywords: Quantum computing, quantum cryptography, device-independence, self-testing, post-quantum cryptography}
}
Document
Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime

Authors: Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha


Abstract
In the trace reconstruction problem, an unknown source string x ∈ {0,1}ⁿ is transmitted through a probabilistic deletion channel which independently deletes each bit with some fixed probability δ and concatenates the surviving bits, resulting in a trace of x. The problem is to reconstruct x given access to independent traces. Trace reconstruction of arbitrary (worst-case) strings is a challenging problem, with the current state of the art for poly(n)-time algorithms being the 2004 algorithm of Batu et al. [T. Batu et al., 2004]. This algorithm can reconstruct an arbitrary source string x ∈ {0,1}ⁿ in poly(n) time provided that the deletion rate δ satisfies δ ≤ n^{-(1/2 + ε)} for some ε > 0. In this work we improve on the result of [T. Batu et al., 2004] by giving a poly(n)-time algorithm for trace reconstruction for any deletion rate δ ≤ n^{-(1/3 + ε)}. Our algorithm works by alternating an alignment-based procedure, which we show effectively reconstructs portions of the source string that are not "highly repetitive", with a novel procedure that efficiently determines the length of highly repetitive subwords of the source string.

Cite as

Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha. Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{chen_et_al:LIPIcs.ITCS.2021.20,
  author =	{Chen, Xi and De, Anindya and Lee, Chin Ho and Servedio, Rocco A. and Sinha, Sandip},
  title =	{{Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.20},
  URN =		{urn:nbn:de:0030-drops-135595},
  doi =		{10.4230/LIPIcs.ITCS.2021.20},
  annote =	{Keywords: trace reconstruction}
}
Document
Metrical Service Systems with Transformations

Authors: Sébastien Bubeck, Niv Buchbinder, Christian Coester, and Mark Sellke


Abstract
We consider a generalization of the fundamental online metrical service systems (MSS) problem where the feasible region can be transformed between requests. In this problem, which we call T-MSS, an algorithm maintains a point in a metric space and has to serve a sequence of requests. Each request is a map (transformation) f_t: A_t → B_t between subsets A_t and B_t of the metric space. To serve it, the algorithm has to go to a point a_t ∈ A_t, paying the distance from its previous position. Then, the transformation is applied, modifying the algorithm’s state to f_t(a_t). Such transformations can model, e.g., changes to the environment that are outside of an algorithm’s control, and we therefore do not charge any additional cost to the algorithm when the transformation is applied. The transformations also make it possible to model the requests occurring in the k-taxi problem. We show that for α-Lipschitz transformations, the competitive ratio is Θ(α)^{n-2} on n-point metrics. Here, the upper bound is achieved by a deterministic algorithm and the lower bound holds even for randomized algorithms. For the k-taxi problem, we prove a competitive ratio of Õ((n log k)²). For chasing convex bodies, we show that no competitive algorithm exists even when the transformations are contracting. The problem T-MSS has a striking connection to the following deep mathematical question: Given a finite metric space M, what is the required cardinality of an extension M̂ ⊇ M in which every partial isometry of M extends to an automorphism? We give partial answers for special cases.
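
A toy Python sketch of the cost accounting just described, with a naive greedy choice of a_t for illustration only (the paper's competitive algorithm is not reproduced here):

def serve_tmss(dist, start, requests):
    # Serve a T-MSS instance: each request is a pair (A, f), where A is the
    # feasible set A_t and f is the transformation f_t: A_t -> B_t.
    # Moving into A is paid; applying f is free.
    pos, cost = start, 0.0
    for A, f in requests:
        a = min(A, key=lambda p: dist(pos, p))  # naive greedy choice of a_t
        cost += dist(pos, a)  # pay the distance to the chosen feasible point
        pos = f(a)            # transformation applied at no extra cost
    return pos, cost

# Example on the real line with the usual metric.
dist = lambda p, q: abs(p - q)
requests = [([0, 5], lambda a: a + 1), ([2, 9], lambda a: 2 * a)]
print(serve_tmss(dist, 0, requests))  # -> (4, 1.0)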

Cite as

Sébastien Bubeck, Niv Buchbinder, Christian Coester, and Mark Sellke. Metrical Service Systems with Transformations. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 21:1-21:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bubeck_et_al:LIPIcs.ITCS.2021.21,
  author =	{Bubeck, S\'{e}bastien and Buchbinder, Niv and Coester, Christian and Sellke, Mark},
  title =	{{Metrical Service Systems with Transformations}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{21:1--21:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.21},
  URN =		{urn:nbn:de:0030-drops-135608},
  doi =		{10.4230/LIPIcs.ITCS.2021.21},
  annote =	{Keywords: Online algorithms, competitive analysis, metrical task systems, k-taxi}
}
Document
Tight Hardness Results for Training Depth-2 ReLU Networks

Authors: Surbhi Goel, Adam Klivans, Pasin Manurangsi, and Daniel Reichman


Abstract
We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (that may include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem is NP-hard already for a network with a single ReLU. We also prove NP-hardness for outputting a weighted sum of k ReLUs minimizing the squared error (for k > 1), even in the realizable setting (i.e., when the labels are consistent with an unknown depth-2 ReLU network). We are also able to obtain lower bounds on the running time in terms of the desired additive error ε. To obtain our lower bounds, we use the Gap Exponential Time Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of approximating the well-known Densest κ-Subgraph problem in subexponential time (these hypotheses are used separately in proving different lower bounds). For example, we prove that under reasonable hardness assumptions, any proper learning algorithm for finding the best-fitting ReLU must run in time exponential in 1/ε². Together with previous work on improperly learning a ReLU [Surbhi Goel et al., 2017], this implies the first separation between proper and improper algorithms for learning a ReLU. We also study the problem of properly learning a depth-2 network of ReLUs with bounded weights, giving new (worst-case) upper bounds on the running time needed to learn such networks in both the realizable and agnostic settings. Our upper bounds on the running time essentially match our lower bounds in terms of the dependency on ε.
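
Concretely, the objective whose minimization is shown NP-hard is the following. A hedged NumPy sketch, with variable names of our choosing:

import numpy as np

def relu_net(W, c, X):
    # Depth-2 ReLU network: a weighted sum (the coefficients c may be
    # negative) of k ReLUs, whose weight vectors are the rows of W.
    return np.maximum(W @ X.T, 0.0).T @ c

def square_loss(W, c, X, y):
    # The training objective: average squared error on the training set.
    return np.mean((relu_net(W, c, X) - y) ** 2)

# Toy instance: k = 2 ReLUs in dimension 3, 10 training points.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(10, 3)), rng.normal(size=10)
W, c = rng.normal(size=(2, 3)), np.array([1.0, -1.0])
print(square_loss(W, c, X, y))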

Cite as

Surbhi Goel, Adam Klivans, Pasin Manurangsi, and Daniel Reichman. Tight Hardness Results for Training Depth-2 ReLU Networks. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{goel_et_al:LIPIcs.ITCS.2021.22,
  author =	{Goel, Surbhi and Klivans, Adam and Manurangsi, Pasin and Reichman, Daniel},
  title =	{{Tight Hardness Results for Training Depth-2 ReLU Networks}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.22},
  URN =		{urn:nbn:de:0030-drops-135611},
  doi =		{10.4230/LIPIcs.ITCS.2021.22},
  annote =	{Keywords: ReLU, Learning Algorithm, Running Time Lower Bound}
}
Document
A Largish Sum-Of-Squares Implies Circuit Hardness and Derandomization

Authors: Pranjal Dutta, Nitin Saxena, and Thomas Thierauf


Abstract
For a polynomial f, we study the sum of squares representation (SOS), i.e. f = ∑_{i ∈ [s]} c_i f_i², where the c_i are field elements and the f_i are polynomials. The size of the representation is the number of monomials that appear across the f_i. Its minimum is the support-sum S(f) of f. For simplicity of exposition, we consider univariate f. A trivial lower bound for the support-sum of a full-support univariate polynomial f of degree d is S(f) ≥ d^{0.5}. We show that the existence of an explicit polynomial f with support-sum just slightly larger than the trivial bound, that is, S(f) ≥ d^{0.5+ε(d)} for a sub-constant function ε(d) > ω(√{log log d/log d}), implies that VP ≠ VNP. The latter is a major open problem in algebraic complexity. A further consequence is that blackbox-PIT is in SUBEXP. Note that a random polynomial fulfills the condition, since for a random f we have S(f) = Θ(d). We also consider the sum-of-cubes representation (SOC) of polynomials. In a similar way, we show that here an explicit hard polynomial even implies that blackbox-PIT is in P.
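
As a toy instance of these definitions (our own example, not taken from the paper):

% For f(x) = x^4 + 2x^2 + 1 we have the SOS representation
\[
  f(x) \;=\; x^4 + 2x^2 + 1 \;=\; 1 \cdot \left( x^2 + 1 \right)^2 ,
\]
% with s = 1, c_1 = 1 and f_1 = x^2 + 1. The monomials appearing across the
% f_i's are x^2 and 1, so this representation has size 2, and hence S(f) <= 2.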

Cite as

Pranjal Dutta, Nitin Saxena, and Thomas Thierauf. A Largish Sum-Of-Squares Implies Circuit Hardness and Derandomization. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{dutta_et_al:LIPIcs.ITCS.2021.23,
  author =	{Dutta, Pranjal and Saxena, Nitin and Thierauf, Thomas},
  title =	{{A Largish Sum-Of-Squares Implies Circuit Hardness and Derandomization}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.23},
  URN =		{urn:nbn:de:0030-drops-135629},
  doi =		{10.4230/LIPIcs.ITCS.2021.23},
  annote =	{Keywords: VP, VNP, hitting set, circuit, polynomial, sparsity, SOS, SOC, PIT, lower bound}
}
Document
Circuit Depth Reductions

Authors: Alexander Golovnev, Alexander S. Kulikov, and R. Ryan Williams


Abstract
The best known size lower bounds against unrestricted circuits have remained around 3n for several decades. Moreover, the only known technique for proving lower bounds in this model, gate elimination, is inherently limited to proving lower bounds of less than 5n. In this work, we propose a non-gate-elimination approach for obtaining circuit lower bounds, via certain depth-three lower bounds. We prove that every (unbounded-depth) circuit of size s can be expressed as an OR of 2^{s/3.9} 16-CNFs. For DeMorgan formulas, the best known size lower bounds have been stuck at around n^{3-o(1)} for decades. Under a plausible hypothesis about probabilistic polynomials, we show that n^{4-ε}-size DeMorgan formulas have 2^{n^{1-Ω(ε)}}-size depth-3 circuits which are approximate sums of n^{1-Ω(ε)}-degree polynomials over F₂. While these structural results do not immediately lead to new lower bounds, they do suggest new avenues of attack on these longstanding lower bound problems. Our results complement the classical depth-3 reduction results of Valiant, which show that logarithmic-depth circuits of linear size can be computed by an OR of 2^{ε n} n^δ-CNFs, and slightly stronger results for series-parallel circuits. It is known that no purely graph-theoretic reduction could yield interesting depth-3 circuits from circuits of super-logarithmic depth. We overcome this limitation (for small-size circuits) by taking into account both the graph-theoretic and functional properties of circuits and formulas. We show that improvements of the following pseudorandom constructions imply super-linear circuit lower bounds for log-depth circuits via Valiant’s reduction: dispersers for varieties, correlation with constant degree polynomials, matrix rigidity, and hardness for depth-3 circuits with constant bottom fan-in. On the other hand, our depth reductions show that even modest improvements of the known constructions give elementary proofs of improved (but still linear) circuit lower bounds.
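
In symbols, the first structural result above states (in our notation, not the paper's) that every Boolean circuit C of size s and unbounded depth can be rewritten as

\[
  C \;=\; \bigvee_{i = 1}^{2^{s/3.9}} F_i ,
\]
% where each F_i is a 16-CNF, i.e., a CNF all of whose clauses have width at most 16.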

Cite as

Alexander Golovnev, Alexander S. Kulikov, and R. Ryan Williams. Circuit Depth Reductions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{golovnev_et_al:LIPIcs.ITCS.2021.24,
  author =	{Golovnev, Alexander and Kulikov, Alexander S. and Williams, R. Ryan},
  title =	{{Circuit Depth Reductions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.24},
  URN =		{urn:nbn:de:0030-drops-135633},
  doi =		{10.4230/LIPIcs.ITCS.2021.24},
  annote =	{Keywords: Circuit complexity, formula complexity, pseudorandomness, matrix rigidity}
}
Document
Dynamic Inference in Probabilistic Graphical Models

Authors: Weiming Feng, Kun He, Xiaoming Sun, and Yitong Yin


Abstract
Probabilistic graphical models, such as Markov random fields (MRFs), are useful for describing high-dimensional distributions in terms of local dependence structures. Probabilistic inference is a fundamental problem for graphical models, and sampling is a main approach to it. In this paper, we study probabilistic inference problems when the graphical model itself is changing dynamically with time. Such dynamic inference problems arise naturally in today’s applications, e.g. multivariate time-series data analysis and practical learning procedures. We give a dynamic algorithm for sampling-based probabilistic inference in MRFs, where each dynamic update can change the underlying graph and all parameters of the MRF simultaneously, as long as the total amount of change is bounded. More precisely, suppose that the MRF has n variables and polylogarithmically bounded maximum degree, and that N(n) independent samples suffice for the inference, for a polynomial function N(⋅). Our algorithm dynamically maintains an answer to the inference problem using Õ(n N(n)) space and Õ(N(n) + n) incremental time per update to the MRF, as long as the MRFs satisfy the Dobrushin-Shlosman condition. This well-known condition has long been used to guarantee the efficiency of Markov chain Monte Carlo (MCMC) sampling in the traditional static setting. Compared to the static case, which requires Ω(n N(n)) time to redraw all N(n) samples whenever the MRF changes, our dynamic algorithm gives an Ω̃(min{n, N(n)})-factor speedup. Our approach relies on a novel dynamic sampling technique, which transforms local Markov chains (a.k.a. single-site dynamics) into dynamic sampling algorithms, and on an "algorithmic Lipschitz" condition that we establish for sampling from graphical models: when the MRF changes by a small difference, the samples can be modified to reflect the new distribution at a cost proportional to the difference in the MRF.
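
To make "single-site dynamics" concrete, here is a minimal Glauber-dynamics sampler for an Ising-type MRF. This is our own illustration of the kind of local Markov chain the technique transforms; it is not the paper's dynamic algorithm.

import math
import random

def glauber_step(sigma, neighbors, beta):
    # One step of single-site (Glauber) dynamics on a ferromagnetic Ising MRF:
    # pick a uniformly random variable and resample it from its conditional
    # distribution given the current spins of its neighbors.
    v = random.randrange(len(sigma))
    field = beta * sum(sigma[u] for u in neighbors[v])
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * field))  # Pr[sigma_v = +1 | rest]
    sigma[v] = 1 if random.random() < p_plus else -1

# Toy 4-cycle at inverse temperature beta = 0.2.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
sigma = [random.choice([-1, 1]) for _ in range(4)]
for _ in range(1000):
    glauber_step(sigma, neighbors, 0.2)
print(sigma)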

Cite as

Weiming Feng, Kun He, Xiaoming Sun, and Yitong Yin. Dynamic Inference in Probabilistic Graphical Models. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{feng_et_al:LIPIcs.ITCS.2021.25,
  author =	{Feng, Weiming and He, Kun and Sun, Xiaoming and Yin, Yitong},
  title =	{{Dynamic Inference in Probabilistic Graphical Models}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{25:1--25:20},
  series =	{Leibniz International