**Published in:** LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

A fundamental problem in circuit complexity is to find explicit functions that require large depth to compute. When considering the natural DeMorgan basis of {OR,AND}, where negations incur no cost, the best known depth lower bounds for an explicit function in NP have the form (3-o(1))log₂ n, established by Håstad (building on others) in the early 1990s. We make progress on the problem of improving this factor of 3, in two different ways:
- We consider an "algorithmic method" approach to proving stronger depth lower bounds for non-uniform circuits in the DeMorgan basis. We show that slightly faster algorithms (than what is known) for counting the number of satisfying assignments on subcubic-size DeMorgan formulas would imply supercubic-size DeMorgan formula lower bounds, implying that the depth must be at least (3+ε)log₂ n for some ε > 0 (see the size-to-depth calculation after this list). For example, if #SAT on formulas of size n^{2+2ε} can be solved in 2^{n - n^{1-ε}log^k n} time for some ε > 0 and a sufficiently large constant k, then there is a function computable in 2^{O(n)} time with a SAT oracle which does not have n^{3+ε}-size formulas. In fact, the #SAT algorithm only has to work on formulas that are a conjunction of n^{1-ε} subformulas, each of size n^{1+3ε}, in order to obtain the supercubic lower bound. As a proof of concept, we show that our new algorithms-to-lower-bounds connection can be applied to prove new lower bounds for "hybrid" DeMorgan formula models which compute interesting functions at their leaves.
- Turning to the {NAND} basis, we establish a greater-than-(3 log₂ n) depth lower bound against uniform circuits solving the SAT problem, using an extension of the "indirect diagonalization" method for NAND formulas. Note that circuits over the NAND basis are a special case of circuits over the DeMorgan basis; however, hard functions such as Andreev’s function (known to require depth (3-o(1))log₂ n in the DeMorgan basis) can still be computed with NAND circuits of depth (3+o(1))log₂ n. Our results imply that SAT requires polylogtime-uniform NAND circuits of depth at least 3.603 log₂ n.
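
The size-to-depth translation used above is the standard one (a textbook fact about fan-in-2 formulas, stated here for context rather than taken from the paper): a formula of depth d has at most 2^d leaves, so a supercubic size lower bound forces depth beyond 3log₂ n.

```latex
\mathrm{size}(F) \le 2^{\mathrm{depth}(F)}
\quad\Longrightarrow\quad
\Bigl(\mathrm{size}(F) > n^{3+\varepsilon}
\;\Rightarrow\;
\mathrm{depth}(F) > \log_2 n^{3+\varepsilon} = (3+\varepsilon)\log_2 n\Bigr).
```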

Gabriel Bathie and R. Ryan Williams. Towards Stronger Depth Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)

@InProceedings{bathie_et_al:LIPIcs.ITCS.2024.10, author = {Bathie, Gabriel and Williams, R. Ryan}, title = {{Towards Stronger Depth Lower Bounds}}, booktitle = {15th Innovations in Theoretical Computer Science Conference (ITCS 2024)}, pages = {10:1--10:24}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-309-6}, ISSN = {1868-8969}, year = {2024}, volume = {287}, editor = {Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.10}, URN = {urn:nbn:de:0030-drops-195388}, doi = {10.4230/LIPIcs.ITCS.2024.10}, annote = {Keywords: DeMorgan formulas, depth complexity, circuit complexity, lower bounds, #SAT, NAND gates, SAT} }

**Published in:** LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

Given the need for ever higher performance, and the failure of CPUs to keep providing single-threaded performance gains, engineers are increasingly turning to highly parallel custom VLSI chips to implement expensive computations. In VLSI design, the gates and wires of a logical circuit are placed on a 2-dimensional chip with a small number of layers. Traditional VLSI models use gate delay to measure the time complexity of the chip, ignoring the lengths of wires. However, as technology has advanced, wire delay is no longer negligible; it has become an important measure in the design of VLSI chips [Markov, Nature (2014)].
Motivated by this situation, we define and study a model for VLSI chips, called wire-delay VLSI, which takes wire delay into account, going beyond an earlier model of Chazelle and Monier [JACM 1985].
- We prove nearly tight upper bounds and lower bounds (up to logarithmic factors) on the time delay of this chip model for several basic problems. For example, And, Or and Parity require Θ(n^{1/3}) delay, while Addition and Multiplication require Θ̃(n^{1/2}) delay, and Triangle Detection on (dense) n-node graphs requires Θ̃(n) delay. Interestingly, when we allow input bits to be read twice, the delay for Addition can be improved to Θ(n^{1/3}).
- We also show that proving significantly higher lower bounds in our wire-delay VLSI model would imply breakthrough results in circuit lower bounds. Motivated by this barrier, we also study conditional lower bounds on the delay of chips based on the Orthogonal Vectors Hypothesis from fine-grained complexity.

Ce Jin, R. Ryan Williams, and Nathaniel Young. A VLSI Circuit Model Accounting for Wire Delay. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 66:1-66:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)

@InProceedings{jin_et_al:LIPIcs.ITCS.2024.66, author = {Jin, Ce and Williams, R. Ryan and Young, Nathaniel}, title = {{A VLSI Circuit Model Accounting for Wire Delay}}, booktitle = {15th Innovations in Theoretical Computer Science Conference (ITCS 2024)}, pages = {66:1--66:22}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-309-6}, ISSN = {1868-8969}, year = {2024}, volume = {287}, editor = {Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.66}, URN = {urn:nbn:de:0030-drops-195949}, doi = {10.4230/LIPIcs.ITCS.2024.66}, annote = {Keywords: circuit complexity, systolic arrays, VLSI, wire delay} }

**Published in:** LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)

What is the power of constant-depth circuits with MOD_m gates, that can count modulo m? Can they efficiently compute MAJORITY and other symmetric functions? When m is a constant prime power, the answer is well understood. In this regime, Razborov and Smolensky proved in the 1980s that MAJORITY and MOD_m require super-polynomial-size MOD_q circuits, where q is any prime power not dividing m. However, relatively little is known about the power of MOD_m gates when m is not a prime power. For example, it is still open whether every problem decidable in exponential time can be computed by depth-3 circuits of polynomial-size and only MOD_6 gates.
In this paper, we shed some light on the difficulty of proving lower bounds for MOD_m circuits, by giving new upper bounds. We show how to construct MOD_m circuits computing symmetric functions with non-prime power m, with size-depth tradeoffs that beat the longstanding lower bounds for AC^0[m] circuits when m is a prime power. Furthermore, we observe that our size-depth tradeoff circuits have essentially optimal dependence on m and d in the exponent, under a natural circuit complexity hypothesis.
For example, we show that for every ε > 0, every symmetric function can be computed using MOD_m circuits of depth 3 and 2^{n^ε} size, for a constant m depending only on ε > 0. In other words, depth-3 CC^0 circuits can compute any symmetric function in subexponential size. This demonstrates a significant difference in the power of depth-3 CC^0 circuits, compared to other models: for certain symmetric functions, depth-3 AC^0 circuits require 2^{Ω(√n)} size [Håstad 1986], and depth-3 AC^0[p^k] circuits (for fixed prime power p^k) require 2^{Ω(n^{1/6})} size [Smolensky 1987]. Even for depth-2 MOD_p ∘ MOD_m circuits, 2^{Ω(n)} lower bounds were known [Barrington Straubing Thérien 1990].
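
One way to see why composite moduli are special here is the Chinese Remainder Theorem (named in the keywords below): for coprime m₁ and m₂, the residue of the Hamming weight modulo m₁m₂ is determined by its residues modulo m₁ and m₂ separately. A quick sanity check of this fact (illustrative only, not the paper's construction):

```python
from itertools import product

# CRT sanity check for m = 6 = 2 * 3: the pair (w mod 2, w mod 3)
# determines w mod 6, so MOD_6 information about a Hamming weight w
# decomposes into MOD_2 and MOD_3 information.
for w1, w2 in product(range(36), repeat=2):
    if (w1 % 2, w1 % 3) == (w2 % 2, w2 % 3):
        assert w1 % 6 == w2 % 6
```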

Brynmor Chapman and R. Ryan Williams. Smaller ACC0 Circuits for Symmetric Functions. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 38:1-38:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)

@InProceedings{chapman_et_al:LIPIcs.ITCS.2022.38, author = {Chapman, Brynmor and Williams, R. Ryan}, title = {{Smaller ACC0 Circuits for Symmetric Functions}}, booktitle = {13th Innovations in Theoretical Computer Science Conference (ITCS 2022)}, pages = {38:1--38:19}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-217-4}, ISSN = {1868-8969}, year = {2022}, volume = {215}, editor = {Braverman, Mark}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.38}, URN = {urn:nbn:de:0030-drops-156342}, doi = {10.4230/LIPIcs.ITCS.2022.38}, annote = {Keywords: ACC, CC, circuit complexity, symmetric functions, Chinese Remainder Theorem} }

**Published in:** LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)

What sort of code is so difficult to analyze that every potential analyst can discern essentially no information from the code, other than its input-output behavior? In their seminal work on program obfuscation, Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang (CRYPTO 2001) proposed the Black-Box Hypothesis, which roughly states that every property of Boolean functions which has an efficient "analyst" and is "code independent" can also be computed by an analyst that only has black-box access to the code. In their formulation of the Black-Box Hypothesis, the "analysts" are arbitrary randomized polynomial-time algorithms, and the "codes" are general (polynomial-size) circuits. If true, the Black-Box Hypothesis would immediately imply NP ⊄ BPP.
We consider generalized forms of the Black-Box Hypothesis, where the set of "codes" 𝒞 and the set of "analysts" 𝒜 may correspond to other efficient models of computation, from more restricted models such as AC⁰ to more general models such as nondeterministic circuits. We show how lower bounds of the form 𝒞 ⊄ 𝒜 often imply a corresponding Black-Box Hypothesis for those respective codes and analysts. We investigate the possibility of "complete" problems for the Black-Box Hypothesis: problems in 𝒞 that lie outside 𝒜 if and only if the corresponding Black-Box Hypothesis is true. Along the way, we prove an equivalence: for nondeterministic circuit classes 𝒞, the "𝒞-circuit satisfiability problem" is not in 𝒜 if and only if the Black-Box Hypothesis is true for analysts in 𝒜.

Brynmor K. Chapman and R. Ryan Williams. Black-Box Hypotheses and Lower Bounds. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 29:1-29:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)

@InProceedings{chapman_et_al:LIPIcs.MFCS.2021.29, author = {Chapman, Brynmor K. and Williams, R. Ryan}, title = {{Black-Box Hypotheses and Lower Bounds}}, booktitle = {46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)}, pages = {29:1--29:22}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-201-3}, ISSN = {1868-8969}, year = {2021}, volume = {202}, editor = {Bonchi, Filippo and Puglisi, Simon J.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.29}, URN = {urn:nbn:de:0030-drops-144698}, doi = {10.4230/LIPIcs.MFCS.2021.29}, annote = {Keywords: Black-Box hypothesis, circuit complexity, lower bounds} }

**Published in:** LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)

The best known size lower bounds against unrestricted circuits have remained around 3n for several decades. Moreover, the only known technique for proving lower bounds in this model, gate elimination, is inherently limited to proving lower bounds of less than 5n. In this work, we propose a non-gate-elimination approach for obtaining circuit lower bounds, via certain depth-three lower bounds. We prove that every (unbounded-depth) circuit of size s can be expressed as an OR of 2^{s/3.9} 16-CNFs. For DeMorgan formulas, the best known size lower bounds have been stuck at around n^{3-o(1)} for decades. Under a plausible hypothesis about probabilistic polynomials, we show that n^{4-ε}-size DeMorgan formulas have 2^{n^{1-Ω(ε)}}-size depth-3 circuits which are approximate sums of n^{1-Ω(ε)}-degree polynomials over F₂. While these structural results do not immediately lead to new lower bounds, they do suggest new avenues of attack on these longstanding lower bound problems.
Our results complement the classical depth-3 reduction results of Valiant, which show that logarithmic-depth circuits of linear size can be computed by an OR of 2^{ε n} n^δ-CNFs, and slightly stronger results for series-parallel circuits. It is known that no purely graph-theoretic reduction could yield interesting depth-3 circuits from circuits of super-logarithmic depth. We overcome this limitation (for small-size circuits) by taking into account both the graph-theoretic and functional properties of circuits and formulas.
We show that improvements of the following pseudorandom constructions imply super-linear circuit lower bounds for log-depth circuits via Valiant’s reduction: dispersers for varieties, correlation with constant degree polynomials, matrix rigidity, and hardness for depth-3 circuits with constant bottom fan-in. On the other hand, our depth reductions show that even modest improvements of the known constructions give elementary proofs of improved (but still linear) circuit lower bounds.

Alexander Golovnev, Alexander S. Kulikov, and R. Ryan Williams. Circuit Depth Reductions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)

@InProceedings{golovnev_et_al:LIPIcs.ITCS.2021.24, author = {Golovnev, Alexander and Kulikov, Alexander S. and Williams, R. Ryan}, title = {{Circuit Depth Reductions}}, booktitle = {12th Innovations in Theoretical Computer Science Conference (ITCS 2021)}, pages = {24:1--24:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-177-1}, ISSN = {1868-8969}, year = {2021}, volume = {185}, editor = {Lee, James R.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.24}, URN = {urn:nbn:de:0030-drops-135633}, doi = {10.4230/LIPIcs.ITCS.2021.24}, annote = {Keywords: Circuit complexity, formula complexity, pseudorandomness, matrix rigidity} }

**Published in:** LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)

A line of work initiated by Fortnow in 1997 has proven model-independent time-space lower bounds for the SAT problem and related problems within the polynomial-time hierarchy. For example, for the SAT problem, the state of the art is that the problem cannot be solved by random-access machines in n^c time and n^{o(1)} space simultaneously for c < 2cos(π/7) ≈ 1.801.
We extend this lower bound approach to the quantum and randomized domains. Combining Grover’s algorithm with components from SAT time-space lower bounds, we show that there are problems verifiable in O(n) time with quantum Merlin-Arthur protocols that cannot be solved in n^c time and n^{o(1)} space simultaneously for c < (3+√3)/2 ≈ 2.366, a super-quadratic time lower bound. This result and the prior work on SAT can both be viewed as consequences of a more general formula for time lower bounds against small-space algorithms, whose asymptotics we study in full.
We also show lower bounds against randomized algorithms: there are problems verifiable in O(n) time with (classical) Merlin-Arthur protocols that cannot be solved in n^c randomized time and O(log n) space simultaneously for c < 1.465, improving a result of Diehl. For quantum Merlin-Arthur protocols, the lower bound in this setting can be improved to c < 1.5.
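
For convenience, the exponents quoted above can be sanity-checked numerically (this is just arithmetic on the stated constants, nothing more):

```python
import math

print(2 * math.cos(math.pi / 7))  # 1.8019... : SAT time-space lower bound exponent
print((3 + math.sqrt(3)) / 2)     # 2.3660... : quantum Merlin-Arthur exponent
```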

Abhijit S. Mudigonda and R. Ryan Williams. Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)

@InProceedings{mudigonda_et_al:LIPIcs.ITCS.2021.50, author = {Mudigonda, Abhijit S. and Williams, R. Ryan}, title = {{Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers}}, booktitle = {12th Innovations in Theoretical Computer Science Conference (ITCS 2021)}, pages = {50:1--50:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-177-1}, ISSN = {1868-8969}, year = {2021}, volume = {185}, editor = {Lee, James R.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.50}, URN = {urn:nbn:de:0030-drops-135897}, doi = {10.4230/LIPIcs.ITCS.2021.50}, annote = {Keywords: Time-space tradeoffs, lower bounds, QMA} }

**Published in:** LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)

We continue the program of proving circuit lower bounds via circuit satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in Quasi-NP = NTIME[n^{(log n)^O(1)}] and NEXP do not have small circuits (in the worst case and/or on average) from various circuit classes C, by showing that C admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount.
In this paper, we present a new strong lower bound consequence of non-trivial #SAT algorithms for a circuit class C. Say a symmetric Boolean function f(x₁,…,x_n) is sparse if it outputs 1 on O(1) values of ∑_i x_i (a small example of a sparse function follows the list below). We show that for every sparse f, and for all "typical" C, faster #SAT algorithms for C circuits actually imply lower bounds against the circuit class f ∘ C, which may be stronger than C itself. In particular:
- #SAT algorithms for n^k-size C-circuits running in 2ⁿ/n^k time (for all k) imply NEXP does not have f ∘ C-circuits of polynomial size.
- #SAT algorithms for 2^{n^ε}-size C-circuits running in 2^{n-n^ε} time (for some ε > 0) imply Quasi-NP does not have f ∘ C-circuits of polynomial size. Applying #SAT algorithms from the literature, one immediate corollary of our results is that Quasi-NP does not have EMAJ ∘ ACC⁰ ∘ THR circuits of polynomial size, where EMAJ is the "exact majority" function, improving previous lower bounds against ACC⁰ [Williams JACM'14] and ACC⁰ ∘ THR [Williams STOC'14], [Murray-Williams STOC'18]. This is the first nontrivial lower bound against such a circuit class.

Nikhil Vyas and R. Ryan Williams. Lower Bounds Against Sparse Symmetric Functions of ACC Circuits: Expanding the Reach of #SAT Algorithms. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 59:1-59:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)

@InProceedings{vyas_et_al:LIPIcs.STACS.2020.59, author = {Vyas, Nikhil and Williams, R. Ryan}, title = {{Lower Bounds Against Sparse Symmetric Functions of ACC Circuits: Expanding the Reach of #SAT Algorithms}}, booktitle = {37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)}, pages = {59:1--59:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-140-5}, ISSN = {1868-8969}, year = {2020}, volume = {154}, editor = {Paul, Christophe and Bl\"{a}ser, Markus}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.59}, URN = {urn:nbn:de:0030-drops-119200}, doi = {10.4230/LIPIcs.STACS.2020.59}, annote = {Keywords: #SAT, satisfiability, circuit complexity, exact majority, ACC} }

**Published in:** LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)

We considerably sharpen the known connections between circuit-analysis algorithms and circuit lower bounds, show intriguing equivalences between the analysis of weak circuits and (apparently difficult) circuits, and provide strong new lower bounds for approximately computing Boolean functions with depth-two neural networks and related models.
- We develop approaches to proving THR o THR lower bounds (a notorious open problem), by connecting algorithmic analysis of THR o THR to the provably weaker circuit classes THR o MAJ and MAJ o MAJ, where exponential lower bounds have long been known. More precisely, we show equivalences between algorithmic analysis of THR o THR and these weaker classes. The epsilon-error CAPP problem asks to approximate the acceptance probability of a given circuit to within additive error epsilon; it is the "canonical" derandomization problem (a trivial sampling baseline for CAPP is sketched after this list). We show:
- There is a non-trivial (2^n/n^{omega(1)} time) 1/poly(n)-error CAPP algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size MAJ o MAJ.
- There is a delta > 0 and a non-trivial SAT (delta-error CAPP) algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size THR o MAJ. Similar results hold for depth-d linear threshold circuits and depth-d MAJORITY circuits. These equivalences are proved via new simulations of THR circuits by circuits with MAJ gates.
- We strengthen the connection between non-trivial derandomization (non-trivial CAPP algorithms) for a circuit class C, and circuit lower bounds against C. Previously, [Ben-Sasson and Viola, ICALP 2014] (following [Williams, STOC 2010]) showed that for any polynomial-size class C closed under projections, non-trivial (2^{n}/n^{omega(1)} time) CAPP for OR_{poly(n)} o AND_{3} o C implies that NEXP does not have C circuits. We apply Probabilistically Checkable Proofs of Proximity in a new way to show it would suffice to have a non-trivial CAPP algorithm for either XOR_2 o C, AND_2 o C or OR_2 o C.
- A direct corollary of the first two bullets is that "NEXP does not have THR o THR circuits" would follow from either:
- a non-trivial delta-error CAPP (or SAT) algorithm for poly(n)-size THR o MAJ circuits, or
- a non-trivial 1/poly(n)-error CAPP algorithm for poly(n)-size MAJ o MAJ circuits.
- Applying the above machinery, we extend lower bounds for depth-two neural networks and related models [R. Williams, CCC 2018] to weak approximate computations of Boolean functions. For example, for arbitrarily small epsilon > 0, we prove there are Boolean functions f computable in nondeterministic n^{log n} time such that (for infinitely many n) every polynomial-size depth-two neural network N on n inputs (with sign or ReLU activation) must satisfy max_{x in {0,1}^n}|N(x)-f(x)|>1/2-epsilon. That is, short linear combinations of ReLU gates fail miserably at computing f to within close precision. Similar results are proved for linear combinations of ACC o THR circuits, and linear combinations of low-degree F_p polynomials. These results constitute further progress towards THR o THR lower bounds.
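
To make the CAPP definition above concrete, here is the trivial randomized estimator it is measured against (a sketch under our own naming; "non-trivial" CAPP means beating 2^n-time deterministic enumeration, which random sampling does not do):

```python
import math
import random

def capp_estimate(circuit, n, eps, delta=0.01):
    # Estimate Pr_x[circuit(x) = 1] over uniform x in {0,1}^n by sampling.
    # By a Hoeffding/Chernoff bound, O(log(1/delta) / eps^2) samples give
    # additive error eps with probability at least 1 - delta.
    m = math.ceil(2 * math.log(2 / delta) / eps ** 2)
    hits = sum(circuit(tuple(random.randint(0, 1) for _ in range(n)))
               for _ in range(m))
    return hits / m

# Example: the acceptance probability of a 3-input AND is 1/8.
approx = capp_estimate(lambda x: int(all(x)), n=3, eps=0.05)
```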

Lijie Chen and R. Ryan Williams. Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 19:1-19:43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{chen_et_al:LIPIcs.CCC.2019.19, author = {Chen, Lijie and Williams, R. Ryan}, title = {{Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity}}, booktitle = {34th Computational Complexity Conference (CCC 2019)}, pages = {19:1--19:43}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-116-0}, ISSN = {1868-8969}, year = {2019}, volume = {137}, editor = {Shpilka, Amir}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.19}, URN = {urn:nbn:de:0030-drops-108419}, doi = {10.4230/LIPIcs.CCC.2019.19}, annote = {Keywords: PCP of Proximity, Circuit Lower Bounds, Derandomization, Threshold Circuits, ReLU} }

**Published in:** LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)

A frontier open problem in circuit complexity is to prove P^{NP} is not in SIZE[n^k] for all k; this is a necessary intermediate step towards proving NP is not in P_{/poly}. Previously, for several classes containing P^{NP}, including NP^{NP}, ZPP^{NP}, and S_2 P, such lower bounds have been proved via Karp-Lipton-style Theorems: to prove C is not in SIZE[n^k] for all k, we show that C subset P_{/poly} implies a "collapse" D = C for some larger class D, where we already know D is not in SIZE[n^k] for all k.
It seems obvious that one could take a different approach to prove circuit lower bounds for P^{NP} that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^{NP} are equivalent to fixed-polynomial size circuit lower bounds for P^{NP}. That is, P^{NP} is not in SIZE[n^k] for all k if and only if (NP subset P_{/poly} implies PH subset i.o.-P^{NP}_{/n}).
Next, we present new consequences of the assumption NP subset P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses. That is, if NP is not in SIZE[n^k] for all k, then for all C in {Parity-P, PP, PSPACE, EXP}, C subset P_{/poly} implies C subset i.o.-NP_{/n^epsilon} for all epsilon > 0. Note that unconditionally, the collapses are only to MA and not NP.
We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show that if a polynomially-sparse NP language does not have n^{1+epsilon}-size circuits, then MA subset i.o.-NP_{/O(log n)}, MA subset i.o.-P^{NP[O(log n)]}, and NEXP is not in SIZE[2^{o(m)}]. Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.

Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams. Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 30:1-30:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{chen_et_al:LIPIcs.CCC.2019.30, author = {Chen, Lijie and McKay, Dylan M. and Murray, Cody D. and Williams, R. Ryan}, title = {{Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems}}, booktitle = {34th Computational Complexity Conference (CCC 2019)}, pages = {30:1--30:21}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-116-0}, ISSN = {1868-8969}, year = {2019}, volume = {137}, editor = {Shpilka, Amir}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.30}, URN = {urn:nbn:de:0030-drops-108525}, doi = {10.4230/LIPIcs.CCC.2019.30}, annote = {Keywords: Karp-Lipton Theorems, Circuit Lower Bounds, Derandomization, Hardness Magnification} }

**Published in:** OASIcs, Volume 61, 1st Symposium on Simplicity in Algorithms (SOSA 2018)

This paper provides both positive and negative results for counting solutions to systems of polynomial equations over a finite field. The general idea is to try to reduce the problem to counting solutions to a single polynomial, where the task is easier. In both cases, simple methods are utilized that we expect will have wider applicability (far beyond algebra).
First, we give an efficient deterministic reduction from approximate counting for a system of (arbitrary) polynomial equations to approximate counting for one equation, over any finite field. We apply this reduction to give a deterministic poly(n,s,log p)/eps^2 time algorithm for approximately counting the fraction of solutions to a system of s quadratic n-variate polynomials over F_p (the finite field of prime order p) to within additive error eps, for any prime p. Note that uniform random sampling would already require Omega(s/eps^2) time, so our algorithm behaves as a full derandomization of uniform sampling. The approximate-counting algorithm yields efficient approximate counting for other well-known problems, such as 2-SAT, NAE-3SAT, and 3-Coloring. As a corollary, there is a deterministic algorithm (with analogous running time) for producing solutions to such systems which have at least eps p^n solutions.
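
As a reference point for what is being approximated, here is the exhaustive p^n-time computation of the solution fraction (our own illustrative code; the paper's deterministic algorithm achieves this approximately in poly(n,s,log p)/eps^2 time):

```python
from itertools import product

def solution_fraction(polys, n, p):
    # Fraction of x in F_p^n satisfying every polynomial in `polys`,
    # where each polynomial is a Python function mapping a tuple x to a
    # value mod p.  Exhaustive enumeration: p^n time.
    good = sum(1 for x in product(range(p), repeat=n)
               if all(q(x) % p == 0 for q in polys))
    return good / p ** n

# Example over F_3: the system { x0*x1 + x2 = 0,  x0 + x1 = 0 }.
frac = solution_fraction([lambda x: x[0] * x[1] + x[2],
                          lambda x: x[0] + x[1]], n=3, p=3)
```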
Second, we consider the difficulty of exactly counting solutions to a single polynomial of constant degree, over a finite field. (Note that finding a solution in this case is easy.) It has been known for over 20 years that this counting problem is already NP-hard for degree-three polynomials over F_2; however, all known reductions increased the number of variables by a considerable amount. We give a subexponential-time reduction from counting solutions to k-CNF formulas to counting solutions to a degree-k^{O(k)} polynomial (over any finite field of O(1) order) which exactly preserves the number of variables. As a corollary, the Strong Exponential Time Hypothesis (even its weak counting variant #SETH) implies that counting solutions to constant-degree polynomials (even over F_2) requires essentially 2^n time. Similar results hold for counting orthogonal pairs of vectors over F_p.

R. Ryan Williams. Counting Solutions to Polynomial Systems via Reductions. In 1st Symposium on Simplicity in Algorithms (SOSA 2018). Open Access Series in Informatics (OASIcs), Volume 61, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

@InProceedings{williams:OASIcs.SOSA.2018.6, author = {Williams, R. Ryan}, title = {{Counting Solutions to Polynomial Systems via Reductions}}, booktitle = {1st Symposium on Simplicity in Algorithms (SOSA 2018)}, pages = {6:1--6:15}, series = {Open Access Series in Informatics (OASIcs)}, ISBN = {978-3-95977-064-4}, ISSN = {2190-6807}, year = {2018}, volume = {61}, editor = {Seidel, Raimund}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2018.6}, URN = {urn:nbn:de:0030-drops-83078}, doi = {10.4230/OASIcs.SOSA.2018.6}, annote = {Keywords: counting complexity, polynomial equations, finite field, derandomization, strong exponential time hypothesis} }

**Published in:** LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)

We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)} size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)} size (non-uniform) circuits as well. This amplification has several consequences:
* An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which are not in LOGSPACE-uniform n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
* Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d we give another natural circuit composition problem computable in tilde{O}(n^{1+e}) time, or in O((log n)^d) space (though not necessarily simultaneously) that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space.
* A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits. As a corollary, if n^{1.1} time computations (or O(n) nondeterministic time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.

Cody D. Murray and R. Ryan Williams. Easiness Amplification and Uniform Circuit Lower Bounds. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 8:1-8:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{murray_et_al:LIPIcs.CCC.2017.8, author = {Murray, Cody D. and Williams, R. Ryan}, title = {{Easiness Amplification and Uniform Circuit Lower Bounds}}, booktitle = {32nd Computational Complexity Conference (CCC 2017)}, pages = {8:1--8:21}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-040-8}, ISSN = {1868-8969}, year = {2017}, volume = {79}, editor = {O'Donnell, Ryan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.8}, URN = {urn:nbn:de:0030-drops-75421}, doi = {10.4230/LIPIcs.CCC.2017.8}, annote = {Keywords: uniform circuit complexity, time complexity, space complexity, non-relativizing, amplification} }

**Published in:** LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)

Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time and space efficient deterministic self-reduction for the k-SUM problem which holds for both models, and has many interesting consequences (a sketch of the standard quadratic-time 3-SUM baseline follows the list). To illustrate:
- 3-SUM is in deterministic time O(n^2*lg(lg(n))/lg(n)) and space O(sqrt(n*lg(n)/lg(lg(n)))). In general, any polylogarithmic-time improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well.
- 3-SUM is in deterministic time O(n^2) and space O(sqrt(n)), derandomizing an algorithm of Wang.
- A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every O(n^{.51})-space algorithm for 3-SUM requires at least n^{2-o(1)} time on the word-RAM.
- For k >= 4, k-SUM is in deterministic O(n^{k-2+2/k}) time and O(sqrt(n)) space.
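
For reference, the standard quadratic-time 3-SUM baseline mentioned above (textbook code using O(n) space; the point of the self-reduction is to keep the quadratic running time while shrinking space toward O(sqrt(n))):

```python
def three_sum(a):
    # Sort, then for each index i sweep the remaining array with two
    # pointers; O(n^2) time, O(n) space, deterministic.
    a = sorted(a)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return None

assert three_sum([5, -7, 2, 9, -1]) == (-7, 2, 5)
```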

Andrea Lincoln, Virginia Vassilevska Williams, Joshua R. Wang, and R. Ryan Williams. Deterministic Time-Space Trade-Offs for k-SUM. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)

@InProceedings{lincoln_et_al:LIPIcs.ICALP.2016.58, author = {Lincoln, Andrea and Vassilevska Williams, Virginia and Wang, Joshua R. and Williams, R. Ryan}, title = {{Deterministic Time-Space Trade-Offs for k-SUM}}, booktitle = {43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)}, pages = {58:1--58:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-013-2}, ISSN = {1868-8969}, year = {2016}, volume = {55}, editor = {Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.58}, URN = {urn:nbn:de:0030-drops-62250}, doi = {10.4230/LIPIcs.ICALP.2016.58}, annote = {Keywords: 3SUM, kSUM, time-space tradeoff, algorithm} }

Invited Talk

**Published in:** LIPIcs, Volume 41, 24th EACSL Annual Conference on Computer Science Logic (CSL 2015)

Complexity lower bounds like P != NP assert impossibility results for all possible programs of some restricted form. As there are presently enormous gaps in our lower bound knowledge, a central question on the minds of today's complexity theorists is: how will we find better ways to reason about all efficient programs?
I argue that some progress can be made by (very deliberately) thinking algorithmically about lower bounds. Slightly more precisely, to prove a lower bound against some class C of programs, we can start by treating C as a set of inputs to another (larger) process, which is intended to perform some basic analysis of programs in C. By carefully studying the algorithmic "meta-analysis" of programs in C, we can learn more about the limitations of the programs being analyzed.
This essay is mostly self-contained; scant knowledge is assumed of the reader.

R. Ryan Williams. Thinking Algorithmically About Impossibility (Invited Talk). In 24th EACSL Annual Conference on Computer Science Logic (CSL 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 41, pp. 14-23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

@InProceedings{williams:LIPIcs.CSL.2015.14, author = {Williams, R. Ryan}, title = {{Thinking Algorithmically About Impossibility}}, booktitle = {24th EACSL Annual Conference on Computer Science Logic (CSL 2015)}, pages = {14--23}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-90-3}, ISSN = {1868-8969}, year = {2015}, volume = {41}, editor = {Kreutzer, Stephan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2015.14}, URN = {urn:nbn:de:0030-drops-54396}, doi = {10.4230/LIPIcs.CSL.2015.14}, annote = {Keywords: satisfiability, derandomization, circuit complexity} }

**Published in:** LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)

The Minimum Circuit Size Problem (MCSP) is: given the truth table of a Boolean function f and a size parameter k, is the circuit complexity of f at most k? This is the definitive problem of circuit synthesis, and it has been studied since the 1950s. Unlike many problems of its kind, MCSP is not known to be NP-hard, yet an efficient algorithm for this problem also seems very unlikely: for example, MCSP in P would imply there are no pseudorandom functions.
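
To fix intuition for the problem statement, here is a brute-force decision procedure for a formula-size variant (our simplification: gates are counted without sharing, so this is not literally the MCSP studied above, and it runs in exponential time):

```python
def has_small_formula(tt, n, k):
    # Does the n-input function with truth table `tt` (a bitmask over
    # the 2^n input rows) have an AND/OR/NOT formula with at most k gates?
    N = 1 << n
    mask = (1 << N) - 1
    variables = [sum(1 << row for row in range(N) if (row >> i) & 1)
                 for i in range(n)]
    by_cost = [set(variables)]      # by_cost[c]: functions first seen at cost c
    reachable = set(variables)
    if tt in reachable:
        return True
    for c in range(1, k + 1):
        cur = {mask & ~f for f in by_cost[c - 1]}       # NOT: cost (c-1) + 1
        for a in range(c):                               # binary gate: a + b + 1 = c
            for f in by_cost[a]:
                for g in by_cost[c - 1 - a]:
                    cur.add(f & g)                       # AND
                    cur.add(f | g)                       # OR
        cur -= reachable
        by_cost.append(cur)
        reachable |= cur
        if tt in reachable:
            return True
    return False

# XOR on 2 inputs (truth table 0b0110) via (x OR y) AND NOT (x AND y): 4 gates.
assert has_small_formula(0b0110, n=2, k=4)
```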
Although most NP-complete problems are complete under strong "local" reduction notions such as poly-logarithmic time projections, we show that MCSP is provably not NP-hard under O(n^(1/2-epsilon))-time projections, for every epsilon > 0. We prove that the NP-hardness of MCSP under (logtime-uniform) AC0 reductions would imply extremely strong lower bounds: NP \not\subset P/poly and E \not\subset i.o.-SIZE(2^(delta * n)) for some delta > 0 (hence P = BPP also follows). We show that even the NP-hardness of MCSP under general polynomial-time reductions would separate complexity classes: EXP != NP \cap P/poly, which implies EXP != ZPP. These results help explain why it has been so difficult to prove that MCSP is NP-hard.
We also consider the nondeterministic generalization of MCSP: the Nondeterministic Minimum Circuit Size Problem (NMCSP), where one wishes to compute the nondeterministic circuit complexity of a given function. We prove that the Sigma_2 P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP \not\subset P/poly.

Cody D. Murray and R. Ryan Williams. On the (Non) NP-Hardness of Computing Circuit Complexity. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 365-380, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

@InProceedings{murray_et_al:LIPIcs.CCC.2015.365, author = {Murray, Cody D. and Williams, R. Ryan}, title = {{On the (Non) NP-Hardness of Computing Circuit Complexity}}, booktitle = {30th Conference on Computational Complexity (CCC 2015)}, pages = {365--380}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-81-1}, ISSN = {1868-8969}, year = {2015}, volume = {33}, editor = {Zuckerman, David}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.365}, URN = {urn:nbn:de:0030-drops-50745}, doi = {10.4230/LIPIcs.CCC.2015.365}, annote = {Keywords: circuit lower bounds, Minimum Circuit Size Problem, NP-completeness, projections, Reductions} }

**Published in:** LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)

The k-Detour problem is a basic path-finding problem: given a graph G on n vertices, with specified nodes s and t, and a positive integer k, the goal is to determine if G has an st-path of length exactly dist(s,t) + k, where dist(s,t) is the length of a shortest path from s to t. The k-Detour problem is NP-hard when k is part of the input, so researchers have sought efficient parameterized algorithms for this task, running in f(k)poly(n) time, for f(⋅) as slow-growing as possible.
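
A naive exponential-time baseline makes the definition concrete (illustrative code of ours; the paper's contribution is achieving c^k · poly(n) time with small c):

```python
from collections import deque

def k_detour(adj, s, t, k):
    # Is there a simple s-t path of length exactly dist(s,t) + k?
    # adj: dict mapping each vertex to a list of neighbors (undirected).
    dist = {s: 0}
    q = deque([s])
    while q:                                   # BFS computes dist(s, .)
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if t not in dist:
        return False
    target = dist[t] + k
    def dfs(u, length, seen):                  # brute force over simple paths
        if u == t:
            return length == target
        if length >= target:
            return False
        return any(dfs(v, length + 1, seen | {v})
                   for v in adj[u] if v not in seen)
    return dfs(s, 0, {s})

# On the 4-cycle 0-1-2-3-0: dist(0,1) = 1, and 0-3-2-1 is a detour with k = 2.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert k_detour(cycle, 0, 1, k=2)
```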
We present faster algorithms for k-Detour in undirected graphs, running in 1.853^k poly(n) randomized and 4.082^kpoly(n) deterministic time. The previous fastest algorithms for this problem took 2.746^k poly(n) randomized and 6.523^k poly(n) deterministic time [Bezáková-Curticapean-Dell-Fomin, ICALP 2017]. Our algorithms use the fact that detecting a path of a given length in an undirected graph is easier if we are promised that the path belongs to what we call a "bipartitioned" subgraph, where the nodes are split into two parts and the path must satisfy constraints on those parts. Previously, this idea was used to obtain the fastest known algorithm for finding paths of length k in undirected graphs [Björklund-Husfeldt-Kaski-Koivisto, JCSS 2017], intuitively by looking for paths of length k in randomly bipartitioned subgraphs. Our algorithms for k-Detour stem from a new application of this idea, which does not involve choosing the bipartitioned subgraphs randomly.
Our work has direct implications for the k-Longest Detour problem, another related path-finding problem. In this problem, we are given the same input as in k-Detour, but are now tasked with determining if G has an st-path of length at least dist(s,t)+k. Our results for k-Detour imply that we can solve k-Longest Detour in 3.432^k poly(n) randomized and 16.661^k poly(n) deterministic time. The previous fastest algorithms for this problem took 7.539^k poly(n) randomized and 42.549^k poly(n) deterministic time [Fomin et al., STACS 2022].

Shyan Akmal, Virginia Vassilevska Williams, Ryan Williams, and Zixuan Xu. Faster Detours in Undirected Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)

@InProceedings{akmal_et_al:LIPIcs.ESA.2023.7, author = {Akmal, Shyan and Vassilevska Williams, Virginia and Williams, Ryan and Xu, Zixuan}, title = {{Faster Detours in Undirected Graphs}}, booktitle = {31st Annual European Symposium on Algorithms (ESA 2023)}, pages = {7:1--7:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-295-2}, ISSN = {1868-8969}, year = {2023}, volume = {274}, editor = {G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.7}, URN = {urn:nbn:de:0030-drops-186601}, doi = {10.4230/LIPIcs.ESA.2023.7}, annote = {Keywords: path finding, detours, parameterized complexity, exact algorithms} }

**Published in:** LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)

Following Razborov and Rudich, a "natural property" for proving a circuit lower bound satisfies three axioms: constructivity, largeness, and usefulness. In 2013, Williams proved that for any reasonable circuit class C, NEXP ⊄ C is equivalent to the existence of a constructive property useful against C. Here, a property is constructive if it can be decided in poly(N) time, where N = 2ⁿ is the length of the truth-table of the given n-input function.
Recently, Fan, Li, and Yang initiated the study of black-box natural properties, which require a much stronger notion of constructivity, called black-box constructivity: the property should be decidable in randomized polylog(N) time, given oracle access to the n-input function. They showed that most proofs based on random restrictions yield black-box natural properties, and demonstrated limitations on what black-box natural properties can prove.
In this paper, perhaps surprisingly, we prove that the equivalence of Williams holds even with this stronger notion of black-box constructivity: for any reasonable circuit class C, NEXP ⊄ C is equivalent to the existence of a black-box constructive property useful against C. The main technical ingredient in proving this equivalence is a smooth, strong, and locally-decodable probabilistically checkable proof (PCP), which we construct based on a recent work by Paradise. As a by-product, we show that average-case witness lower bounds for PCP verifiers follow from NEXP lower bounds.
We also show that randomness is essential in the definition of black-box constructivity: we unconditionally prove that there is no deterministic polylog(N)-time constructive property that is useful against even polynomial-size AC⁰ circuits.

Lijie Chen, Ryan Williams, and Tianqi Yang. Black-Box Constructive Proofs Are Unavoidable. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 35:1-35:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)

@InProceedings{chen_et_al:LIPIcs.ITCS.2023.35, author = {Chen, Lijie and Williams, Ryan and Yang, Tianqi}, title = {{Black-Box Constructive Proofs Are Unavoidable}}, booktitle = {14th Innovations in Theoretical Computer Science Conference (ITCS 2023)}, pages = {35:1--35:24}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-263-1}, ISSN = {1868-8969}, year = {2023}, volume = {251}, editor = {Tauman Kalai, Yael}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.35}, URN = {urn:nbn:de:0030-drops-175380}, doi = {10.4230/LIPIcs.ITCS.2023.35}, annote = {Keywords: Circuit lower bounds, natural proofs, probabilistic checkable proofs} }

**Published in:** LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)

This paper studies the interaction of oracles with algorithmic approaches to proving circuit complexity lower bounds, establishing new results on two different kinds of questions.
1) We revisit some prominent open questions in circuit lower bounds, and provide a clean way of viewing them as circuit upper bound questions. Let Missing-String be the (total) search problem of producing a string that does not appear in a given list L containing M bit-strings of length N, where M < 2^N. We show in a generic way how algorithms and uniform circuits (from restricted classes) for Missing-String imply complexity lower bounds (and in some cases, the converse holds as well).
We give a local algorithm for Missing-String, which can compute any desired output bit making very few probes into the input, when the number of strings M is small enough. We apply this to prove a new nearly-optimal (up to oracles) time hierarchy theorem with advice.
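
The classic diagonalization algorithm is the simplest instance of such a local algorithm, in the regime M ≤ N (the paper handles a wider range of M; this sketch is ours):

```python
def missing_string_diagonal(L, N):
    # Given a list L of M <= N bit-strings of length N, return a length-N
    # string not in L.  Output bit i is the flip of bit i of the i-th
    # string, so each output bit needs only a single probe into the input.
    out = []
    for i in range(N):
        bit = L[i][i] if i < len(L) else '0'
        out.append('1' if bit == '0' else '0')
    return ''.join(out)

assert missing_string_diagonal(['00', '01'], 2) not in ['00', '01']
```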
We show that the problem of constructing restricted uniform circuits for Missing-String is essentially equivalent to constructing functions without small non-uniform circuits, in a relativizing way. For example, we prove that small uniform depth-3 circuits for Missing-String would imply exponential circuit lower bounds for Σ₂ EXP, and that depth-3 lower bounds for Missing-String would imply non-trivial circuits (relative to an oracle) for Σ₂ EXP problems. Both conclusions are longstanding open problems in circuit complexity.
2) It has been known since Impagliazzo, Kabanets, and Wigderson [JCSS 2002] that generic derandomizations improving subexponentially over exhaustive search would imply lower bounds such as NEXP ⊄ 𝖯/poly. Williams [SICOMP 2013] showed that Circuit-SAT algorithms running barely faster than exhaustive search would imply similar lower bounds. The known proofs of such results do not relativize (they use techniques from interactive proofs/PCPs). However, it has remained open whether there is an oracle under which the generic implications from circuit-analysis algorithms to circuit lower bounds fail.
Building on an oracle of Fortnow, we construct an oracle relative to which the circuit approximation probability problem (CAPP) is in 𝖯, yet EXP^{NP} has polynomial-size circuits.
We construct an oracle relative to which SAT can be solved in "half-exponential" time, yet exponential time (EXP) has polynomial-size circuits. Improving EXP to NEXP would give an oracle relative to which Σ₂ 𝖤 has "half-exponential" size circuits, which is open. (Recall it is known that Σ₂ 𝖤 is not in "sub-half-exponential" size, and the proof relativizes.) Moreover, the running time of the SAT algorithm cannot be improved: relative to all oracles, if SAT is in "sub-half-exponential" time then EXP does not have polynomial-size circuits.

Nikhil Vyas and Ryan Williams. On Oracles and Algorithmic Methods for Proving Lower Bounds. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 99:1-99:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)

@InProceedings{vyas_et_al:LIPIcs.ITCS.2023.99, author = {Vyas, Nikhil and Williams, Ryan}, title = {{On Oracles and Algorithmic Methods for Proving Lower Bounds}}, booktitle = {14th Innovations in Theoretical Computer Science Conference (ITCS 2023)}, pages = {99:1--99:26}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-263-1}, ISSN = {1868-8969}, year = {2023}, volume = {251}, editor = {Tauman Kalai, Yael}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.99}, URN = {urn:nbn:de:0030-drops-176021}, doi = {10.4230/LIPIcs.ITCS.2023.99}, annote = {Keywords: oracles, relativization, circuit complexity, missing string, exponential hierarchy} }

**Published in:** LIPIcs, Volume 241, 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)

In 1981, Neil Immerman described a two-player game, which he called the "separability game" [Neil Immerman, 1981], that captures the number of quantifiers needed to describe a property in first-order logic. Immerman’s paper laid the groundwork for studying the number of quantifiers needed to express properties in first-order logic, but the game seemed to be too complicated to study, and the arguments of the paper almost exclusively used quantifier rank as a lower bound on the total number of quantifiers. However, last year Fagin, Lenchner, Regan and Vyas [Fagin et al., 2021] rediscovered the game, provided some tools for analyzing it, and showed how to utilize it to characterize the number of quantifiers needed to express linear orders of different sizes. In this paper, we push forward in the study of the number of quantifiers as a bona fide complexity measure by establishing several new results. First we carefully distinguish the minimum number of quantifiers from the more usual descriptive complexity measures, minimum quantifier rank and minimum number of variables. Then, for each positive integer k, we give an explicit example of a property of finite structures (in particular, of finite graphs) that can be expressed with a sentence of quantifier rank k, but where the same property needs 2^Ω(k²) quantifiers to be expressed. We next give the precise number of quantifiers needed to distinguish two rooted trees of different depths. Finally, we give a new upper bound on the number of quantifiers needed to express s-t connectivity, improving the previously known bound by a constant factor.

Ronald Fagin, Jonathan Lenchner, Nikhil Vyas, and Ryan Williams. On the Number of Quantifiers as a Complexity Measure. In 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 241, pp. 48:1-48:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)

@InProceedings{fagin_et_al:LIPIcs.MFCS.2022.48, author = {Fagin, Ronald and Lenchner, Jonathan and Vyas, Nikhil and Williams, Ryan}, title = {{On the Number of Quantifiers as a Complexity Measure}}, booktitle = {47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)}, pages = {48:1--48:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-256-3}, ISSN = {1868-8969}, year = {2022}, volume = {241}, editor = {Szeider, Stefan and Ganian, Robert and Silva, Alexandra}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2022.48}, URN = {urn:nbn:de:0030-drops-168460}, doi = {10.4230/LIPIcs.MFCS.2022.48}, annote = {Keywords: number of quantifiers, multi-structural games, complexity measure, s-t connectivity, trees, rooted trees} }

**Published in:** LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)

In a Merlin-Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin-Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
- Certifying that a list of n integers has no 3-SUM solution can be done in Merlin-Arthur time Õ(n). Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in Õ(n^{1.5}) time (that is, there is a proof system with proofs of length Õ(n^{1.5}) and a deterministic verifier running in Õ(n^{1.5}) time).
- Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin-Arthur time Õ(n^{⌈ k/2⌉}) (where k ≥ 3). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin-Arthur time Õ(m). Previous Merlin-Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count k-cliques in unweighted graphs, and had worse running times for small k.
- Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin-Arthur time Õ(n²). Note this is optimal, as the matrix can have Ω(n²) nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an Õ(n^{2.94}) nondeterministic time algorithm.
- Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin-Arthur time 2^{n/2 - n/O(k)}. We also observe an algebrization barrier for the previous 2^{n/2}⋅ poly(n)-time Merlin-Arthur protocol of R. Williams [CCC'16] for #SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in 2^{n/2}/n^{ω(1)} time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
- Certifying a Quantified Boolean Formula is true can be done in Merlin-Arthur time 2^{4n/5}⋅poly(n). Previously, the only nontrivial result known along these lines was an Arthur-Merlin-Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in 2^{2n/3}⋅poly(n) time.

Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin-Arthur time 2^{n/3}⋅poly(n), improving on the previous best protocol by Nederlof [IPL 2017], which took 2^{0.49991n}⋅poly(n) time.
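A minimal sketch of the recurring mechanism behind such protocols, assuming the standard polynomial-encoding trick: Merlin sends the coefficients of a product polynomial, and Arthur certifies the claimed identity with a single random evaluation (Schwartz-Zippel style), then reads the answer off the certified coefficients. All names below are ours, the pair-counting task is a simplified stand-in, and the paper's actual Õ(n) 3-SUM protocol requires further ideas (triples, distinct indices).

```python
import random

def poly_from_values(vals, shift):
    """Encode a multiset of integers as f(x) = sum_i x^(vals[i] + shift)."""
    m = max(v + shift for v in vals)
    f = [0] * (m + 1)
    for v in vals:
        f[v + shift] += 1
    return f

def convolve(f, g):
    """Merlin's (quadratic, honest) computation of the product polynomial."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def eval_mod(coeffs, r, p):
    """Horner evaluation in O(len(coeffs)) time."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * r + c) % p
    return acc

def arthur_accepts(f, claimed_h, p=(1 << 61) - 1):
    """Check claimed_h = f^2 at one random point mod a prime: a false claim
    survives with probability at most deg(claimed_h)/p."""
    r = random.randrange(p)
    return eval_mod(f, r, p) ** 2 % p == eval_mod(claimed_h, r, p)

A = [-3, -1, 0, 2, 4, -1]
shift = 3                       # make all exponents nonnegative
f = poly_from_values(A, shift)
h = convolve(f, f)              # Merlin's work; Arthur never redoes it
assert arthur_accepts(f, h)
print(h[0 + 2 * shift])         # certified count of pairs (i, j) with A[i] + A[j] = 0
```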

Shyan Akmal, Lijie Chen, Ce Jin, Malvika Raj, and Ryan Williams. Improved Merlin-Arthur Protocols for Central Problems in Fine-Grained Complexity. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 3:1-3:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{akmal_et_al:LIPIcs.ITCS.2022.3, author = {Akmal, Shyan and Chen, Lijie and Jin, Ce and Raj, Malvika and Williams, Ryan}, title = {{Improved Merlin-Arthur Protocols for Central Problems in Fine-Grained Complexity}}, booktitle = {13th Innovations in Theoretical Computer Science Conference (ITCS 2022)}, pages = {3:1--3:25}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-217-4}, ISSN = {1868-8969}, year = {2022}, volume = {215}, editor = {Braverman, Mark}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.3}, URN = {urn:nbn:de:0030-drops-155991}, doi = {10.4230/LIPIcs.ITCS.2022.3}, annote = {Keywords: Fine-grained complexity, Merlin-Arthur proofs} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

We show that the permanent of an n x n matrix over any finite ring of r <= n elements can be computed with a deterministic 2^{n-Omega(n/r)} time algorithm. This improves on a Las Vegas algorithm running in expected 2^{n-Omega(n/(r log r))} time, implicit in [Björklund, Husfeldt, and Lyckberg, IPL 2017]. For the permanent over the integers of a 0/1-matrix with exactly d ones per row and column, we provide a deterministic 2^{n-Omega(n/d^{3/4})} time algorithm. This improves on a 2^{n-Omega(n/d)} time algorithm in [Cygan and Pilipczuk, ICALP 2013]. We also show that the number of Hamiltonian cycles in an n-vertex directed graph of average degree delta can be computed by a deterministic 2^{n-Omega(n/delta)} time algorithm. This improves on a Las Vegas algorithm running in expected 2^{n-Omega(n/poly(delta))} time in [Björklund, Kaski, and Koutis, ICALP 2017].
A key tool in our approach is a reduction from computing the permanent to listing pairs of dissimilar vectors from two sets of vectors, i.e., vectors over a finite set that differ in each coordinate, building on an observation of [Bax and Franklin, Algorithmica 2002]. We propose algorithms that can be used both to derandomise the construction of Bax and Franklin and to efficiently list dissimilar pairs, using several algorithmic tools. We also give a simple randomised algorithm, resulting in Monte Carlo algorithms within the same time bounds.
Our new fast algorithms for listing dissimilar vector pairs from two sets of vectors are inspired by recent algorithms for detecting and counting orthogonal vectors by [Abboud, Williams, and Yu, SODA 2015] and [Chan and Williams, SODA 2016].
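To pin down the listing subproblem the reduction targets, here is the naive quadratic baseline (ours, purely for orientation); the paper's contribution is precisely to beat this kind of brute force using the orthogonal-vectors machinery.

```python
def dissimilar_pairs(U, V):
    """Naively list pairs (u, v) with u[i] != v[i] in every coordinate.

    This O(|U|*|V|*d) baseline only fixes the subproblem; the paper's fast
    algorithms batch the comparisons to beat it."""
    for u in U:
        for v in V:
            if all(a != b for a, b in zip(u, v)):
                yield (u, v)

U = [(0, 1, 2), (1, 1, 0)]
V = [(1, 0, 1), (0, 1, 1)]
print(list(dissimilar_pairs(U, V)))   # [((0, 1, 2), (1, 0, 1))]
```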

Andreas Björklund and Ryan Williams. Computing Permanents and Counting Hamiltonian Cycles by Listing Dissimilar Vectors. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2019.25, author = {Bj\"{o}rklund, Andreas and Williams, Ryan}, title = {{Computing Permanents and Counting Hamiltonian Cycles by Listing Dissimilar Vectors}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {25:1--25:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.25}, URN = {urn:nbn:de:0030-drops-106018}, doi = {10.4230/LIPIcs.ICALP.2019.25}, annote = {Keywords: permanent, Hamiltonian cycle, orthogonal vectors} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

We consider the problem of finding solutions to systems of polynomial equations over a finite field. Lokshtanov et al. [SODA'17] recently obtained the first worst-case algorithms that beat exhaustive search for this problem. In particular for degree-d equations modulo two in n variables, they gave an O^*(2^{(1-1/(5d))n}) time algorithm, and for the special case d=2 they gave an O^*(2^{0.876n}) time algorithm.
We modify their approach in a way that improves these running times to O^*(2^{(1-1/(2.7d))n}) and O^*(2^{0.804n}), respectively. In particular, our latter bound, which holds for all systems of quadratic equations modulo 2, comes close to the O^*(2^{0.792n}) expected time bound of an algorithm empirically found to hold for random equation systems in Bardet et al. [J. Complexity, 2013]. Our improvement involves three observations:
1) The Valiant-Vazirani lemma can be used to reduce the solution-finding problem to that of counting solutions modulo 2.
2) The monomials in the probabilistic polynomials used in this solution-counting modulo 2 have a special form that we exploit to obtain better bounds on their number than in Lokshtanov et al. [SODA'17].
3) The problem of solution-counting modulo 2 can be "embedded" in a smaller instance of the original problem, which enables us to apply the algorithm as a subroutine to itself.
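For concreteness, here is a brute-force sketch (ours) of the quantity the self-reduction computes: the parity of the number of common roots of a GF(2) polynomial system, by exhaustive 2^n enumeration. The paper's point is to compute this far faster than the loop below.

```python
from itertools import product

def parity_of_solutions(polys, n):
    """Parity of |{x in GF(2)^n : p(x) = 0 for all p in polys}|.

    Each polynomial is a list of monomials; each monomial is a frozenset of
    variable indices (the empty frozenset is the constant 1)."""
    parity = 0
    for x in product((0, 1), repeat=n):
        if all(sum(all(x[i] for i in mono) for mono in poly) % 2 == 0
               for poly in polys):
            parity ^= 1
    return parity

# The system x0*x1 + x2 = 0 and x0 + 1 = 0 over GF(2)^3 has two solutions.
p1 = [frozenset({0, 1}), frozenset({2})]
p2 = [frozenset({0}), frozenset()]
print(parity_of_solutions([p1, p2], 3))   # 0
```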

Andreas Björklund, Petteri Kaski, and Ryan Williams. Solving Systems of Polynomial Equations over GF(2) by a Parity-Counting Self-Reduction. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2019.26, author = {Bj\"{o}rklund, Andreas and Kaski, Petteri and Williams, Ryan}, title = {{Solving Systems of Polynomial Equations over GF(2) by a Parity-Counting Self-Reduction}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {26:1--26:13}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.26}, URN = {urn:nbn:de:0030-drops-106023}, doi = {10.4230/LIPIcs.ICALP.2019.26}, annote = {Keywords: equation systems, polynomial method} }

Document

**Published in:** LIPIcs, Volume 89, 12th International Symposium on Parameterized and Exact Computation (IPEC 2017)

We present two new data structures for computing values of an n-variate polynomial P of degree at most d over a finite field of q elements. Assuming that d divides q-1, our first data structure relies on (d+1)^{n+2} tabulated values of P to produce the value of P at any of the q^n points using O(nqd^2) arithmetic operations in the finite field. Assuming that s divides d and d/s divides q-1, our second data structure assumes that P satisfies a degree-separability condition and relies on (d/s+1)^{n+s} tabulated values to produce the value of P at any point using O(nq^ssq) arithmetic operations. Our data structures are based on generalizing upper-bound constructions due to Mockenhaupt and Tao (2004), Saraf and Sudan (2008), and Dvir (2009) for Kakeya sets in finite vector spaces from linear to higher-degree polynomial curves.
As an application we show that the new data structures enable a faster algorithm for computing integer-valued fermionants, a family of self-reducible polynomial functions introduced by Chandrasekharan and Wiese (2011) that captures numerous fundamental algebraic and combinatorial invariants such as the determinant, the permanent, the number of Hamiltonian cycles in a directed multigraph, as well as certain partition functions of strongly correlated electron systems in statistical physics. In particular, a corollary of our main theorem for fermionants is that the permanent of an m-by-m integer matrix with entries bounded in absolute value by a constant can be computed in time 2^{m-Omega(sqrt(m/log log m))}, improving an earlier algorithm of Björklund (2016) that runs in time 2^{m-Omega(sqrt(m/log m))}.
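The mechanism in its simplest, linear-curve case, as a hedged sketch: a polynomial of total degree at most d restricted to a line is univariate of degree at most d, so d+1 tabulated values along the line determine P everywhere on that line by Lagrange interpolation. The generalized Kakeya sets of the paper replace lines with higher-degree curves to shrink the table; the toy polynomial and parameters below are ours.

```python
def lagrange_eval(xs, ys, t, p):
    """Evaluate at t (mod prime p) the unique polynomial of degree < len(xs)
    passing through the points (xs[i], ys[i])."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (t - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

p, d = 101, 2
P = lambda x, y: (3 * x * x + 5 * x * y + 7) % p          # toy degree-2 polynomial
a, b = (1, 2), (4, 9)                                     # the line a + t*b in F_p^2
ts = [0, 1, 2]                                            # d+1 tabulated parameters
table = [P((a[0] + t * b[0]) % p, (a[1] + t * b[1]) % p) for t in ts]
t_query = 57
assert lagrange_eval(ts, table, t_query, p) == P((a[0] + t_query * b[0]) % p,
                                                 (a[1] + t_query * b[1]) % p)
```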

Andreas Björklund, Petteri Kaski, and Ryan Williams. Generalized Kakeya Sets for Polynomial Evaluation and Faster Computation of Fermionants. In 12th International Symposium on Parameterized and Exact Computation (IPEC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 89, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


@InProceedings{bjorklund_et_al:LIPIcs.IPEC.2017.6, author = {Bj\"{o}rklund, Andreas and Kaski, Petteri and Williams, Ryan}, title = {{Generalized Kakeya Sets for Polynomial Evaluation and Faster Computation of Fermionants}}, booktitle = {12th International Symposium on Parameterized and Exact Computation (IPEC 2017)}, pages = {6:1--6:13}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-051-4}, ISSN = {1868-8969}, year = {2018}, volume = {89}, editor = {Lokshtanov, Daniel and Nishimura, Naomi}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2017.6}, URN = {urn:nbn:de:0030-drops-85728}, doi = {10.4230/LIPIcs.IPEC.2017.6}, annote = {Keywords: Besicovitch set, fermionant, finite field, finite vector space, Hamiltonian cycle, homogeneous polynomial, Kakeya set, permanent, polynomial evaluation} }

Document

**Published in:** Dagstuhl Reports, Volume 5, Issue 4 (2015)

This report documents the program and the outcomes of Dagstuhl Seminar 15171 "Theory and Practice of SAT Solving". The purpose of this workshop was to explore one of the most significant problems in all of computer science, namely that of computing whether formulas in propositional logic are satisfiable or not. This problem is believed to be intractable in general (by the theory of NP-completeness). However, the last two decades have seen dramatic developments in algorithmic techniques, and today so-called SAT solvers are routinely and successfully used to solve large-scale real-world instances in a wide range of application areas.
A surprising aspect of this development is that the best current SAT solvers are still to a large extent based on methods from the early 1960s, which can often handle formulas with millions of variables but may also get hopelessly stuck on formulas with just a few hundred variables. The fundamental question of when SAT solvers perform well or badly, and what underlying mathematical properties of the formulas influence SAT solver performance, remains very poorly understood.
Another intriguing aspect is that much stronger mathematical methods of reasoning about propositional logic formulas are known today, in particular methods based on algebra and geometry, and these methods would seem to have great potential based on theoretical studies. However, attempts at harnessing the power of such methods have conspicuously failed to deliver any significant improvements in practical performance. This workshop gathered leading researchers in applied and theoretical areas of SAT and computational complexity to stimulate an increased exchange of ideas between these two communities. We see great opportunities for fruitful interplay between theoretical and applied research in this area, and believe that this workshop showed beyond doubt that a more vigorous interaction between the two has potential for major long-term impact in computer science, as well as for applications in industry.

Armin Biere, Vijay Ganesh, Martin Grohe, Jakob Nordström, and Ryan Williams. Theory and Practice of SAT Solving (Dagstuhl Seminar 15171). In Dagstuhl Reports, Volume 5, Issue 4, pp. 98-122, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


@Article{biere_et_al:DagRep.5.4.98, author = {Biere, Armin and Ganesh, Vijay and Grohe, Martin and Nordstr\"{o}m, Jakob and Williams, Ryan}, title = {{Theory and Practice of SAT Solving (Dagstuhl Seminar 15171)}}, pages = {98--122}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2015}, volume = {5}, number = {4}, editor = {Biere, Armin and Ganesh, Vijay and Grohe, Martin and Nordstr\"{o}m, Jakob and Williams, Ryan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.5.4.98}, URN = {urn:nbn:de:0030-drops-53520}, doi = {10.4230/DagRep.5.4.98}, annote = {Keywords: SAT, Boolean SAT solvers, SAT solving, conflict-driven clause learning, Gr\"{o}bner bases, pseudo-Boolean solvers, proof complexity, computational complexity, parameterized complexity} }

Document

**Published in:** Dagstuhl Reports, Volume 3, Issue 8 (2013)

This report documents the program and the outcomes of Dagstuhl Seminar 13331 "Exponential Algorithms: Algorithms and Complexity Beyond Polynomial Time". Problems are often solved in practice by algorithms with worst-case exponential time complexity. It is of interest to find the fastest algorithms for a given problem, be it polynomial, exponential, or something in between. The focus of the seminar was on finer-grained notions of complexity than NP-completeness and on understanding the exact complexities of problems. The report provides a rationale for the workshop and chronicles the presentations at the workshop. It notes the progress on the open problems posed at the past workshops on the same topic, and it also reports a collection of results that cite the presentations at the previous seminar. The document presents the collection of the abstracts of the results presented at the seminar, together with a compendium of open problems.

Thore Husfeldt, Ramamohan Paturi, Gregory B. Sorkin, and Ryan Williams. Exponential Algorithms: Algorithms and Complexity Beyond Polynomial Time (Dagstuhl Seminar 13331). In Dagstuhl Reports, Volume 3, Issue 8, pp. 40-72, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


@Article{husfeldt_et_al:DagRep.3.8.40, author = {Husfeldt, Thore and Paturi, Ramamohan and Sorkin, Gregory B. and Williams, Ryan}, title = {{Exponential Algorithms: Algorithms and Complexity Beyond Polynomial Time (Dagstuhl Seminar 13331)}}, pages = {40--72}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2013}, volume = {3}, number = {8}, editor = {Husfeldt, Thore and Paturi, Ramamohan and Sorkin, Gregory B. and Williams, Ryan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.3.8.40}, URN = {urn:nbn:de:0030-drops-43422}, doi = {10.4230/DagRep.3.8.40}, annote = {Keywords: Algorithms, exponential time algorithms, exact algorithms, computational complexity, satisfiability} }

Document

**Published in:** LIPIcs, Volume 5, 27th International Symposium on Theoretical Aspects of Computer Science (2010)

A fertile area of recent research has demonstrated concrete polynomial time lower bounds for solving natural hard problems on restricted computational models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path, $\text{MOD}_6\text{-SAT}$, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs of these lower bounds follow a certain proof-by-contradiction strategy that we call *alternation-trading*. An important open problem is to determine how powerful such proofs can possibly be.
We propose a methodology for studying these proofs that makes them amenable to both formal analysis and automated theorem proving. We prove that the search for better lower bounds can often be turned into a problem of solving a large series of linear programming instances. Implementing a small-scale theorem prover based on this result, we extract new human-readable time lower bounds for several problems. This framework can also be used to prove concrete limitations on the current techniques.
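A toy of the stated methodology, under loudly hypothetical constraints: treat "does some proof annotation achieve exponent c?" as an LP feasibility question and binary-search the best exponent. The two inequalities below are illustrative stand-ins of ours, not the paper's actual encoding of alternation-trading rules.

```python
import numpy as np
from scipy.optimize import linprog

def provable(expo):
    """Feasibility LP for a candidate lower-bound exponent.

    Hypothetical annotation constraints (illustrative only):
        x1 - x2 >= expo - 1   and   x1 + 2*x2 <= 1,  with x1, x2 >= 0."""
    A_ub = np.array([[-1.0, 1.0],      # -(x1 - x2) <= -(expo - 1)
                     [1.0, 2.0]])      #   x1 + 2*x2 <= 1
    b_ub = np.array([1.0 - expo, 1.0])
    res = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.status == 0             # status 0 <=> a feasible annotation exists

# Binary-search the largest provable exponent, mirroring how the paper turns
# lower-bound search into a series of linear programming instances.
lo, hi = 1.0, 3.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if provable(mid) else (lo, mid)
print(round(lo, 6))   # 2.0 for this toy constraint set
```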

Ryan Williams. Alternation-Trading Proofs, Linear Programming, and Lower Bounds. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 669-680, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{williams:LIPIcs.STACS.2010.2494, author = {Williams, Ryan}, title = {{Alternation-Trading Proofs, Linear Programming, and Lower Bounds}}, booktitle = {27th International Symposium on Theoretical Aspects of Computer Science}, pages = {669--680}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-16-3}, ISSN = {1868-8969}, year = {2010}, volume = {5}, editor = {Marion, Jean-Yves and Schwentick, Thomas}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2494}, URN = {urn:nbn:de:0030-drops-24940}, doi = {10.4230/LIPIcs.STACS.2010.2494}, annote = {Keywords: Time-space tradeoffs, lower bounds, alternation, linear programming} }

Document

**Published in:** Dagstuhl Seminar Proceedings, Volume 8431, Moderately Exponential Time Algorithms (2008)

Two problem sessions were part of the seminar on Moderately Exponential Time Algorithms. Some of the open problems presented at those sessions have been collected.

Fedor V. Fomin, Kazuo Iwama, Dieter Kratsch, Petteri Kaski, Mikko Koivisto, Lukasz Kowalik, Yoshio Okamoto, Johan van Rooij, and Ryan Williams. 08431 Open Problems – Moderately Exponential Time Algorithms. In Moderately Exponential Time Algorithms. Dagstuhl Seminar Proceedings, Volume 8431, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


@InProceedings{fomin_et_al:DagSemProc.08431.3, author = {Fomin, Fedor V. and Iwama, Kazuo and Kratsch, Dieter and Kaski, Petteri and Koivisto, Mikko and Kowalik, Lukasz and Okamoto, Yoshio and van Rooij, Johan and Williams, Ryan}, title = {{08431 Open Problems – Moderately Exponential Time Algorithms}}, booktitle = {Moderately Exponential Time Algorithms}, pages = {1--8}, series = {Dagstuhl Seminar Proceedings (DagSemProc)}, ISSN = {1862-4405}, year = {2008}, volume = {8431}, editor = {Fedor V. Fomin and Kazuo Iwama and Dieter Kratsch}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08431.3}, URN = {urn:nbn:de:0030-drops-17986}, doi = {10.4230/DagSemProc.08431.3}, annote = {Keywords: Algorithms, NP-hard problems, Moderately Exponential Time Algorithms} }

Document

**Published in:** LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)

In the Orthogonal Vectors (OV) problem, we wish to determine if there is an orthogonal pair of vectors among n Boolean vectors in d dimensions. The OV Conjecture (OVC) posits that OV requires n^{2-o(1)} time to solve, for all d=omega(log n). Assuming the OVC, optimal time lower bounds have been proved for many prominent problems in P, such as Edit Distance, Frechet Distance, Longest Common Subsequence, and approximating the diameter of a graph.
We prove that OVC is true in several computational models of interest:
- For all sufficiently large n and d, OV for n vectors in {0,1}^d has branching program complexity Theta~(n * min(n,2^d)). In particular, the lower and upper bounds match up to polylog factors.
- OV has Boolean formula complexity Theta~(n * min(n,2^d)), over all complete bases of O(1) fan-in.
- OV requires Theta~(n * min(n,2^d)) wires, in formulas comprised of gates computing arbitrary symmetric functions of unbounded fan-in.
Our lower bounds basically match the best known (quadratic) lower bounds for any explicit function in those models. Analogous lower bounds hold for many related problems shown to be hard under OVC, such as Batch Partial Match, Batch Subset Queries, and Batch Hamming Nearest Neighbors, all of which have very succinct reductions to OV.
The proofs use a certain kind of input restriction that is different from typical random restrictions where variables are assigned independently. We give a sense in which independent random restrictions cannot be used to show hardness, in that OVC is false in the "average case" even for AC^0 formulas:
For all p in (0,1) there is a delta_p > 0 such that for every n and d, OV instances with input bits independently set to 1 with probability p (and 0 otherwise) can be solved with AC^0 formulas of O(n^{2-delta_p}) size, on all but a o_n(1) fraction of instances. Moreover, lim_{p -> 1} delta_p = 1.
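On the upper-bound side of the Theta~(n * min(n,2^d)) bounds: when 2^d <= n, orthogonal pairs can even be counted in O(n*d + 2^d*d) time by a standard subset-sum (zeta transform) DP. The sketch below is ours and only illustrates the min(n,2^d) regime; it is not the branching-program construction from the paper.

```python
def count_orthogonal_pairs(vectors, d):
    """Count ordered pairs (u, v) with bitwise u AND v == 0, in O(n*d + 2^d*d).

    cnt[m] = number of input vectors whose bitmask equals m; the subset-sum DP
    turns it into sub[m] = number of vectors whose mask is a subset of m.
    Then u is orthogonal to v iff v's mask is a subset of the complement of u."""
    cnt = [0] * (1 << d)
    masks = []
    for vec in vectors:
        m = 0
        for i, bit in enumerate(vec):
            m |= bit << i
        masks.append(m)
        cnt[m] += 1
    sub = cnt[:]
    for i in range(d):                      # standard subset-sum (zeta) transform
        for m in range(1 << d):
            if m >> i & 1:
                sub[m] += sub[m ^ (1 << i)]
    full = (1 << d) - 1
    return sum(sub[full ^ m] for m in masks)

vecs = [(1, 0, 0), (0, 1, 0), (0, 0, 0), (1, 1, 1)]
print(count_orthogonal_pairs(vecs, 3))   # 9 ordered pairs (self-pairs of the zero vector included)
```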

Daniel M. Kane and Richard Ryan Williams. The Orthogonal Vectors Conjecture for Branching Programs and Formulas. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 48:1-48:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{kane_et_al:LIPIcs.ITCS.2019.48, author = {Kane, Daniel M. and Williams, Richard Ryan}, title = {{The Orthogonal Vectors Conjecture for Branching Programs and Formulas}}, booktitle = {10th Innovations in Theoretical Computer Science Conference (ITCS 2019)}, pages = {48:1--48:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-095-8}, ISSN = {1868-8969}, year = {2019}, volume = {124}, editor = {Blum, Avrim}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.48}, URN = {urn:nbn:de:0030-drops-101418}, doi = {10.4230/LIPIcs.ITCS.2019.48}, annote = {Keywords: fine-grained complexity, orthogonal vectors, branching programs, symmetric functions, Boolean formulas} }

Document

**Published in:** LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)

We define a model of size-S R-way branching programs with oracles that can make up to S distinct oracle queries over all of their possible inputs, and generalize a lower bound proof strategy of Beame [SICOMP 1991] to apply in the case of random oracles. Through a series of succinct reductions, we prove that the following problems require randomized algorithms where the product of running time and space usage must be Omega(n^2/poly(log n)) to obtain correct answers with constant nonzero probability, even for algorithms with constant-time access to a uniform random oracle (i.e., a uniform random hash function):
- Given an unordered list L of n elements from [n] (possibly with repeated elements), output [n]-L.
- Counting satisfying assignments to a given 2CNF, and printing any satisfying assignment to a given 3CNF. Note it is a major open problem to prove a time-space product lower bound of n^{2-o(1)} for the decision version of SAT, or even for the decision problem Majority-SAT.
- Printing the truth table of a given CNF formula F with k inputs and n=O(2^k) clauses, with values printed in lexicographical order (i.e., F(0^k), F(0^{k-1}1), ..., F(1^k)). Thus we have a 4^k/poly(k) lower bound in this case.
- Evaluating a circuit with n inputs and O(n) outputs.
As our lower bounds are based on R-way branching programs, they hold for any reasonable model of computation (e.g. log-word RAMs and multitape Turing machines).
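To see what the tradeoff forbids, consider the first problem above with generous space (a baseline sketch of ours): with about n bits of marking memory the task takes linear time, and the theorem says that shrinking the space to polylog(n) necessarily pushes the time up to n^2/poly(log n), even given a random oracle.

```python
def missing_elements(L, n):
    """Output [n] - L in linear time, using ~n bits of marking memory.

    The paper's Omega(n^2/poly(log n)) time*space bound means an algorithm
    restricted to polylog(n) space must instead spend near-quadratic time."""
    seen = bytearray(n + 1)
    for x in L:
        seen[x] = 1
    return [v for v in range(1, n + 1) if not seen[v]]

print(missing_elements([2, 2, 5, 1], 5))   # [3, 4]
```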

Dylan M. McKay and Richard Ryan Williams. Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 56:1-56:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{mckay_et_al:LIPIcs.ITCS.2019.56, author = {McKay, Dylan M. and Williams, Richard Ryan}, title = {{Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle}}, booktitle = {10th Innovations in Theoretical Computer Science Conference (ITCS 2019)}, pages = {56:1--56:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-095-8}, ISSN = {1868-8969}, year = {2019}, volume = {124}, editor = {Blum, Avrim}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.56}, URN = {urn:nbn:de:0030-drops-101493}, doi = {10.4230/LIPIcs.ITCS.2019.56}, annote = {Keywords: branching programs, random oracles, time-space tradeoffs, lower bounds, SAT, counting complexity} }

Document

Invited Paper

**Published in:** LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)

In 2010, the author proposed a program for proving lower bounds in circuit complexity, via faster algorithms for circuit satisfiability and related problems. This talk will give an overview of how the program works, report on the successes of this program so far, and outline open frontiers that have yet to be resolved.

Richard Ryan Williams. Lower Bounds by Algorithm Design: A Progress Report (Invited Paper). In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


@InProceedings{williams:LIPIcs.ICALP.2018.4, author = {Williams, Richard Ryan}, title = {{Lower Bounds by Algorithm Design: A Progress Report}}, booktitle = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, pages = {4:1--4:1}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-076-7}, ISSN = {1868-8969}, year = {2018}, volume = {107}, editor = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.4}, URN = {urn:nbn:de:0030-drops-90088}, doi = {10.4230/LIPIcs.ICALP.2018.4}, annote = {Keywords: circuit complexity, satisfiability, derandomization} }

Document

**Published in:** LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)

We consider the problem of representing Boolean functions exactly by "sparse" linear combinations (over R) of functions from some "simple" class C. In particular, given C we are interested in finding low-complexity functions lacking sparse representations. When C forms a basis for the space of Boolean functions (e.g., the set of PARITY functions or the set of conjunctions) this sort of problem has a well-understood answer; the problem becomes interesting when C is "overcomplete" and the set of functions is not linearly independent. We focus on the cases where C is the set of linear threshold functions, the set of rectified linear units (ReLUs), and the set of low-degree polynomials over a finite field, all of which are well-studied in different contexts.
We provide generic tools for proving lower bounds on representations of this kind. Applying these, we give several new lower bounds for "semi-explicit" Boolean functions. Let alpha(n) be an unbounded function such that n^{alpha(n)} is time constructible (e.g. alpha(n) = log^*(n)). We show:
- Functions in NTIME[n^{alpha(n)}] that require super-polynomially many linear threshold functions to represent (depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds).
- Functions in NTIME[n^{alpha(n)}] that require super-polynomially many ReLU gates to represent (depth-two neural networks with ReLU activation function).
- Functions in NTIME[n^{alpha(n)}] that require super-polynomially many O(1)-degree F_p-polynomials to represent exactly, for every prime p (related to problems regarding Higher-Order "Uncertainty Principles"). We also obtain a function in E^{NP} requiring 2^{Omega(n)} linear combinations.
- Functions in NTIME[n^{poly(log n)}] that require super-polynomially many ACC ∘ THR circuits to represent exactly (further generalizing the recent lower bounds of Murray and the author).

We also obtain "fixed-polynomial" lower bounds for functions in NP, for the first three representation classes. All our lower bounds are obtained via algorithms for analyzing linear combinations of simple functions in the above scenarios, in ways which substantially beat exhaustive search.
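When C is the set of PARITY functions, "representation by a linear combination" is exactly the Fourier expansion, which is unique and easy to compute; the difficulty the paper addresses is that thresholds, ReLUs, and low-degree polynomials are overcomplete, so uniqueness fails. A small sketch (ours) of the basis case, for contrast:

```python
from itertools import product

def parity_expansion(f, n):
    """Exact coefficients of f over the parity basis chi_S(x) = (-1)^{sum_{i in S} x_i},
    via the standard formula c_S = E_x[f(x) * chi_S(x)].  Over a basis this
    representation is unique; for overcomplete classes C it is not."""
    coeffs = {}
    points = list(product((0, 1), repeat=n))
    for S in product((0, 1), repeat=n):
        c = sum(f(x) * (-1) ** sum(a & b for a, b in zip(S, x)) for x in points)
        c /= len(points)
        if c:
            coeffs[S] = c
    return coeffs

AND = lambda x: x[0] & x[1]
print(parity_expansion(AND, 2))
# {(0, 0): 0.25, (0, 1): -0.25, (1, 0): -0.25, (1, 1): 0.25}
```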

Richard Ryan Williams. Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 6:1-6:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


@InProceedings{williams:LIPIcs.CCC.2018.6, author = {Williams, Richard Ryan}, title = {{Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials}}, booktitle = {33rd Computational Complexity Conference (CCC 2018)}, pages = {6:1--6:24}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-069-9}, ISSN = {1868-8969}, year = {2018}, volume = {102}, editor = {Servedio, Rocco A.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.6}, URN = {urn:nbn:de:0030-drops-88846}, doi = {10.4230/LIPIcs.CCC.2018.6}, annote = {Keywords: linear threshold functions, lower bounds, neural networks, low-degree polynomials} }

Document

**Published in:** LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)

We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for every arithmetic circuit C(x_1,...,x_n) of size s and degree d over a field F, and any inputs a_1,...,a_K in F^n,
- the Prover sends the Verifier the values C(a_1), ..., C(a_K) in F and a proof of ~O(K * d) length, and
- the Verifier tosses poly(log(dK|F|/epsilon)) coins and can check the proof in about ~O(K * (n + d) + s) time, with probability of error less than epsilon.
For small degree d, this "Merlin-Arthur" proof system (a.k.a. MA-proof system) runs in nearly-linear time, and has many applications. For example, we obtain MA-proof systems that run in c^{n} time (for various c < 2) for the Permanent, #Circuit-SAT for all sublinear-depth circuits, counting Hamiltonian cycles, and infeasibility of 0-1 linear programs. In general, the value of any polynomial in Valiant's class VP can be certified faster than "exhaustive summation" over all possible assignments. These results strongly refute a Merlin-Arthur Strong ETH and Arthur-Merlin Strong ETH posed by Russell Impagliazzo and others.
We also give a three-round (AMA) proof system for quantified Boolean formulas running in 2^{2n/3+o(n)} time, nearly-linear-time MA-proof systems for counting orthogonal vectors in a collection and finding Closest Pairs in the Hamming metric, and an MA-proof system running in n^{k/2+O(1)} time for counting k-cliques in graphs.
We point to some potential future directions for refuting the Nondeterministic Strong ETH.
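A hedged miniature of the curve trick that drives such batch-evaluation protocols, with C played by an explicit low-degree polynomial (the toy "circuit", points, and field below are all ours): the prover sends the univariate restriction of C to a curve through the K query points; the verifier interpolates it at one random parameter, evaluates C there once itself, and on agreement reads off all K values from the proof.

```python
import random

P = (1 << 61) - 1   # a Mersenne prime, standing in for the field F

def curve_through(points, t):
    """Lagrange curve gamma with gamma(i) = points[i], evaluated coordinatewise at t."""
    K, dim = len(points), len(points[0])
    out = []
    for coord in range(dim):
        acc = 0
        for i in range(K):
            num = den = 1
            for j in range(K):
                if j != i:
                    num = num * (t - j) % P
                    den = den * (i - j) % P
            acc = (acc + points[i][coord] * num * pow(den, -1, P)) % P
        out.append(acc)
    return tuple(out)

def C(x):                                    # toy "circuit": total degree 3
    return (x[0] * x[0] * x[1] + 3 * x[1] + 1) % P

a = [(2, 5), (7, 1), (4, 4)]                 # K = 3 query points; gamma has degree K-1 = 2
ts = range(3 * (len(a) - 1) + 1)             # deg q = deg(C)*deg(gamma) <= 6, so 7 values
proof = [C(curve_through(a, t)) for t in ts] # prover sends q(t) = C(gamma(t)) by its values

r = random.randrange(P)                      # verifier: one random spot-check
q_r = 0
for i in ts:                                 # interpolate q(r) from the proof values
    num = den = 1
    for j in ts:
        if j != i:
            num = num * (r - j) % P
            den = den * (i - j) % P
    q_r = (q_r + proof[i] * num * pow(den, -1, P)) % P
assert q_r == C(curve_through(a, r))         # accept, then read off C(a_i) = proof[i]
```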

Richard Ryan Williams. Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{williams:LIPIcs.CCC.2016.2, author = {Williams, Richard Ryan}, title = {{Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation}}, booktitle = {31st Conference on Computational Complexity (CCC 2016)}, pages = {2:1--2:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-008-8}, ISSN = {1868-8969}, year = {2016}, volume = {50}, editor = {Raz, Ran}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.2}, URN = {urn:nbn:de:0030-drops-58307}, doi = {10.4230/LIPIcs.CCC.2016.2}, annote = {Keywords: counting complexity, exponential-time hypothesis, interactive proofs, Merlin-Arthur games} }

Document

Invited Talk

**Published in:** LIPIcs, Volume 29, 34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014)

In circuit complexity, the polynomial method is a general approach to proving circuit lower bounds in restricted settings. One shows that functions computed by sufficiently restricted circuits are "correlated" in some way with a low-complexity polynomial, where complexity may be measured by the degree of the polynomial or the number of monomials. Then, results limiting the capabilities of low-complexity polynomials are extended to the restricted circuits.
Old theorems proved by this method have recently found interesting applications to the design of algorithms for basic problems in the theory of computing. This paper surveys some of these applications, and gives a few new ones.
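One classical ingredient in this toolbox is the Razborov-Smolensky probabilistic polynomial for OR over F_2: the OR of t random parities is a degree-t polynomial that errs on any fixed input with probability at most 2^{-t}. A small empirical sketch (ours):

```python
import random

def probabilistic_or(n, t):
    """Sample a degree-t F_2 polynomial approximating OR on n bits.

    Each trial takes a random parity <r, x> mod 2: it is zero on x = 0 always,
    and nonzero with probability 1/2 on each fixed x != 0.  OR-ing t independent
    parities is wrong on any fixed input with probability at most 2^{-t}."""
    rs = [[random.randint(0, 1) for _ in range(n)] for _ in range(t)]
    def p(x):
        return int(any(sum(ri * xi for ri, xi in zip(r, x)) % 2 for r in rs))
    return p

random.seed(0)
p = probabilistic_or(8, 10)
x = [0, 1, 0, 0, 1, 0, 0, 0]
print(p(x), int(any(x)))   # agree with probability >= 1 - 2^{-10}
```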

Richard Ryan Williams. The Polynomial Method in Circuit Complexity Applied to Algorithm Design (Invited Talk). In 34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 29, pp. 47-60, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


@InProceedings{williams:LIPIcs.FSTTCS.2014.47, author = {Williams, Richard Ryan}, title = {{The Polynomial Method in Circuit Complexity Applied to Algorithm Design}}, booktitle = {34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014)}, pages = {47--60}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-77-4}, ISSN = {1868-8969}, year = {2014}, volume = {29}, editor = {Raman, Venkatesh and Suresh, S. P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2014.47}, URN = {urn:nbn:de:0030-drops-48328}, doi = {10.4230/LIPIcs.FSTTCS.2014.47}, annote = {Keywords: algorithm design, circuit complexity, polynomial method} }

Document

**Published in:** Dagstuhl Reports, Volume 13, Issue 5 (2023)

This report documents the program and the outcomes of Dagstuhl Seminar 23181 "Empirical Evaluation of Secure Development Processes". This was the second seminar on this subject. It brought together researchers and practitioners from the fields of software engineering, IT security and human factors, to discuss challenges and possible solutions with respect to empirically assessing secure engineering activities.

Eric Bodden, Sam Weber, and Laurie Williams. Empirical Evaluation of Secure Development Processes (Dagstuhl Seminar 23181). In Dagstuhl Reports, Volume 13, Issue 5, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@Article{bodden_et_al:DagRep.13.5.1, author = {Bodden, Eric and Weber, Sam and Williams, Laurie}, title = {{Empirical Evaluation of Secure Development Processes (Dagstuhl Seminar 23181)}}, pages = {1--21}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2023}, volume = {13}, number = {5}, editor = {Bodden, Eric and Weber, Sam and Williams, Laurie}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.13.5.1}, URN = {urn:nbn:de:0030-drops-193636}, doi = {10.4230/DagRep.13.5.1}, annote = {Keywords: Empirical assessment, Secure development lifecycle} }

Document

**Published in:** Dagstuhl Reports, Volume 4, Issue 6 (2015)

This report documents the program and the outcomes of Dagstuhl Seminar 14261 "Software Development Analytics". We briefly summarize the goals and format of the seminar, the results of the breakout groups, and a draft of a manifesto for software analytics. The report also includes the abstracts of the talks presented at the seminar.

Harald Gall, Tim Menzies, Laurie Williams, and Thomas Zimmermann. Software Development Analytics (Dagstuhl Seminar 14261). In Dagstuhl Reports, Volume 4, Issue 6, pp. 64-83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


@Article{gall_et_al:DagRep.4.6.64, author = {Gall, Harald and Menzies, Tim and Williams, Laurie and Zimmermann, Thomas}, title = {{Software Development Analytics (Dagstuhl Seminar 14261)}}, pages = {64--83}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2014}, volume = {4}, number = {6}, editor = {Gall, Harald and Menzies, Tim and Williams, Laurie and Zimmermann, Thomas}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.4.6.64}, URN = {urn:nbn:de:0030-drops-47638}, doi = {10.4230/DagRep.4.6.64}, annote = {Keywords: software development, data-driven decision making, analytics, empirical software engineering, mining software repositories, business intelligence, pre} }

Document

**Published in:** Dagstuhl Reports, Volume 7, Issue 1 (2017)

This report documents the program and the outcomes of Dagstuhl Seminar 17031 on "Planning and Robotics". The seminar was concerned with the synergy between the research areas of Automated Planning & Scheduling and Robotics. The motivation for this seminar was to bring together researchers from the two communities and people from industry, in order to foster a broader interest in the integration of planning and deliberation approaches to sensory-motor functions in robotics. The first part of the seminar was devoted to eight sessions, organized around several topics, in which attendees had the opportunity to present position statements. The second part consisted of six panel sessions in which attendees further discussed those position statements and the issues raised in the earlier sessions. The main outcomes were a greater common understanding of planning and robotics issues and challenges, and a greater appreciation of the crossover between different perspectives, spanning from low-level control to high-level cognitive approaches for autonomous robots. Attendees also discussed various application domains in which the deployment of planning and robotics methodologies and technologies constitutes an added value.

Malik Ghallab, Nick Hawes, Daniele Magazzeni, Brian C. Williams, and Andrea Orlandini. Planning and Robotics (Dagstuhl Seminar 17031). In Dagstuhl Reports, Volume 7, Issue 1, pp. 32-73, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


@Article{ghallab_et_al:DagRep.7.1.32, author = {Ghallab, Malik and Hawes, Nick and Magazzeni, Daniele and Williams, Brian C. and Orlandini, Andrea}, title = {{Planning and Robotics (Dagstuhl Seminar 17031)}}, pages = {32--73}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2017}, volume = {7}, number = {1}, editor = {Ghallab, Malik and Hawes, Nick and Magazzeni, Daniele and Williams, Brian C. and Orlandini, Andrea}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.7.1.32}, URN = {urn:nbn:de:0030-drops-72451}, doi = {10.4230/DagRep.7.1.32}, annote = {Keywords: adjustable autonomy, artificial intelligence, automated planning and scheduling, goal reasoning, human-robot interaction, plan execution, robotics} }

Document

**Published in:** Dagstuhl Seminar Proceedings, Volume 10451, Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems (2011)

From November 7 to 12, 2010, the Dagstuhl Seminar 10451 "Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, 35 participants presented their current research and discussed ongoing work and open problems. This document puts together abstracts of the presentations given during the seminar, and provides links to extended abstracts or full papers, if available.

Klaus Havelund, Martin Leucker, Martin Sachenbacher, Oleg Sokolsky, and Brian C. Williams. 10451 Abstracts Collection – Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems. In Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems. Dagstuhl Seminar Proceedings, Volume 10451, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{havelund_et_al:DagSemProc.10451.1, author = {Havelund, Klaus and Leucker, Martin and Sachenbacher, Martin and Sokolsky, Oleg and Williams, Brian C.}, title = {{10451 Abstracts Collection – Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems}}, booktitle = {Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems}, pages = {1--15}, series = {Dagstuhl Seminar Proceedings (DagSemProc)}, ISSN = {1862-4405}, year = {2011}, volume = {10451}, editor = {Klaus Havelund and Martin Leucker and Martin Sachenbacher and Oleg Sokolsky and Brian C. Williams}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10451.1}, URN = {urn:nbn:de:0030-drops-29487}, doi = {10.4230/DagSemProc.10451.1}, annote = {Keywords: Runtime Verification, Model-based Diagnosis, Planning, Control, Autonomous Systems} }

Document

**Published in:** Dagstuhl Seminar Proceedings, Volume 10451, Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems (2011)

From November 7 to 12, 2010, the Dagstuhl Seminar 10451 'Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems' was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, 35 participants presented their current research and discussed ongoing work and open problems. This document puts together abstracts of the presentations given during the seminar, and provides links to extended abstracts or full papers, if available.

Klaus Havelund, Martin Leucker, Martin Sachenbacher, Oleg Sokolsky, and Brian C. Williams. 10451 Executive Summary – Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems. In Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems. Dagstuhl Seminar Proceedings, Volume 10451, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{havelund_et_al:DagSemProc.10451.2, author = {Havelund, Klaus and Leucker, Martin and Sachenbacher, Martin and Sokolsky, Oleg and Williams, Brian C.}, title = {{10451 Executive Summary – Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems}}, booktitle = {Runtime Verification, Diagnosis, Planning and Control for Autonomous Systems}, pages = {1--4}, series = {Dagstuhl Seminar Proceedings (DagSemProc)}, ISSN = {1862-4405}, year = {2011}, volume = {10451}, editor = {Klaus Havelund and Martin Leucker and Martin Sachenbacher and Oleg Sokolsky and Brian C. Williams}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10451.2}, URN = {urn:nbn:de:0030-drops-29476}, doi = {10.4230/DagSemProc.10451.2}, annote = {Keywords: Runtime Verification, Model-based Diagnosis, Planning, Control, Autonomous Systems} }

Document

**Published in:** Dagstuhl Seminar Proceedings, Volume 10081, Cognitive Robotics (2010)

Robotic devices, such as rovers and autonomous spacecraft, have been successfully controlled by plan execution systems that use plans with temporal flexibility to dynamically adapt to temporal disturbances. To date these execution systems apply to discrete systems that abstract away the detailed dynamic constraints of the controlled device. To control dynamic, under-actuated devices, such as agile bipedal walking machines, we extend this execution paradigm to incorporate detailed dynamic constraints. Building upon prior work on dispatchable plan execution, we introduce a novel approach to flexible plan execution of hybrid under-actuated systems that achieves robustness by exploiting spatial as well as temporal plan flexibility. To accomplish this, we first transform the high-dimensional system into a set of low dimensional, weakly coupled systems. Second, to coordinate these systems such that they achieve the plan in real-time, we compile a plan into a concurrent timed flow tube description. This description represents all feasible control trajectories and their temporal coordination constraints, such that each trajectory satisfies all plan and dynamic constraints. Finally, the problem of runtime plan dispatching is reduced to maintaining state trajectories in their associated flow tubes, while satisfying the coordination constraints. This is accomplished through an efficient local search algorithm that adjusts a small number of control parameters in real-time. The first step has been published previously; this paper focuses on the last two steps. The approach is validated on the execution of a set of bipedal walking plans, using a high fidelity simulation of a biped.

Andreas G. Hofmann and Brian C. Williams. Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems. In Cognitive Robotics. Dagstuhl Seminar Proceedings, Volume 10081, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{hofmann_et_al:DagSemProc.10081.8, author = {Hofmann, Andreas G. and Williams, Brian C.}, title = {{Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems}}, booktitle = {Cognitive Robotics}, pages = {1--8}, series = {Dagstuhl Seminar Proceedings (DagSemProc)}, ISSN = {1862-4405}, year = {2010}, volume = {10081}, editor = {Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10081.8}, URN = {urn:nbn:de:0030-drops-27740}, doi = {10.4230/DagSemProc.10081.8}, annote = {Keywords: } }

Document

**Published in:** LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)

The number of ordered trees (also known as plane trees) with n nodes is the (n-1)st Catalan number C_{n-1}. An ordered tree can be stored directly using nodes and pointers, or represented indirectly by a Dyck word. This paper presents a loopless algorithm for generating ordered trees with n nodes using pointer-based representations. In other words, we spend 𝒪(C_{n-1})-time to generate all of the trees, and moreover, the delay between consecutive trees is worst-case 𝒪(1)-time.
To achieve this run-time, each tree must differ from the previous by a constant amount. In other words, the algorithm must create a type of Gray code order. Our algorithm operates on the children of a node like a stack, by popping the first child off of one node’s stack and pushing the result onto another node’s stack. We refer to this pop-push operation as a pull, and consecutive trees in our order differ by one or two pulls. There is a simple two-case successor rule that determines the pulls to apply directly from the current tree. When converted to Dyck words, our rule corresponds to a left-shift, and these shifts generate a cool-lex variant of lexicographic order.
Our results represent the first pull Gray code for ordered trees, and the first fully published loopless algorithm for ordered trees using pointer representations. More importantly, our algorithm is incredibly simple: a full implementation in C, including initialization and output, uses only three loops and three if-else blocks. Our work also establishes a simultaneous Gray code for Dyck words, ordered trees, and binary trees, using cool-lex order.
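For orientation, the bijection in play: ordered trees with n nodes correspond to Dyck words with n-1 ones. The plain recursive generator below (ours) just fixes the objects and the Catalan count; it is emphatically not the paper's loopless pointer-based algorithm, which changes O(1) pointers between consecutive trees.

```python
def dyck_words(k):
    """All balanced strings with k '1's and k '0's, i.e. ordered trees with
    k+1 nodes.  A plain recursive generator, not a loopless Gray code."""
    def rec(prefix, opens, closes):
        if opens == closes == k:
            yield prefix
            return
        if opens < k:
            yield from rec(prefix + "1", opens + 1, closes)
        if closes < opens:
            yield from rec(prefix + "0", opens, closes + 1)
    yield from rec("", 0, 0)

print(sum(1 for _ in dyck_words(3)))   # 5 = Catalan(3): ordered trees on 4 nodes
```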

Paul Lapey and Aaron Williams. Pop & Push: Ordered Tree Iteration in 𝒪(1)-Time. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 53:1-53:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{lapey_et_al:LIPIcs.ISAAC.2022.53, author = {Lapey, Paul and Williams, Aaron}, title = {{Pop \& Push: Ordered Tree Iteration in 𝒪(1)-Time}}, booktitle = {33rd International Symposium on Algorithms and Computation (ISAAC 2022)}, pages = {53:1--53:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-258-7}, ISSN = {1868-8969}, year = {2022}, volume = {248}, editor = {Bae, Sang Won and Park, Heejin}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.53}, URN = {urn:nbn:de:0030-drops-173380}, doi = {10.4230/LIPIcs.ISAAC.2022.53}, annote = {Keywords: combinatorial generation, Gray code, simultaneous Gray code, ordered trees, plane trees, Dyck words, binary trees, Catalan objects, loopless algorithm, cool-lex order} }

Document

**Published in:** LIPIcs, Volume 226, 11th International Conference on Fun with Algorithms (FUN 2022)

We study the space reachable by rolling a 3D convex polyhedron on a 2D periodic tessellation in the xy-plane, where at every step a face of the polyhedron must coincide exactly with a tile of the tessellation it rests upon, and the polyhedron rotates around one of the incident edges of that face until the neighboring face hits the xy-plane. If the whole plane can be reached by a sequence of such rolls, we call the polyhedron a plane roller for the given tessellation. We further classify polyhedra that reach a constant fraction of the plane, an infinite area but vanishing fraction of the plane, or a bounded area as hollow-plane rollers, band rollers, and bounded rollers, respectively. We present a polynomial-time algorithm to determine the set of tiles in a given periodic tessellation reachable by a given polyhedron from a given starting position, which in particular determines the roller type of the polyhedron and tessellation. Using this algorithm, we compute the reachability for every regular-faced convex polyhedron on every regular-tiled (≤ 4)-uniform tessellation.
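The algorithmic core in miniature, as a hedged sketch of ours: breadth-first search over (tile, orientation) states. We take the easiest instance, a cube rolling freely on the square grid, where every roll is legal, so the search only has to track how orientations compose; the paper's algorithm additionally enforces the face-tile matching constraint and works modulo the tessellation's periodicity.

```python
from collections import deque

def cube_reachability(radius):
    """BFS over (cell, orientation) states for a cube on the square grid,
    restricted to a (2*radius+1)^2 window around the start."""
    def roll(o, dx, dy):
        d, n, e = o                          # faces currently down, north, east
        if dy == 1:  return (n, 5 - d, e)    # roll north
        if dy == -1: return (5 - n, d, e)    # roll south
        if dx == 1:  return (e, n, 5 - d)    # roll east
        return (5 - e, n, d)                 # roll west

    start = ((0, 0), (0, 1, 2))              # opposite faces satisfy opp(f) = 5 - f
    seen, queue = {start}, deque([start])
    while queue:
        (x, y), o = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if abs(nx) <= radius and abs(ny) <= radius:
                s = ((nx, ny), roll(o, dx, dy))
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
    return len({cell for cell, _ in seen})

print(cube_reachability(2))   # 25: every cell of the 5x5 window is reached (a plane roller)
```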

Akira Baes, Erik D. Demaine, Martin L. Demaine, Elizabeth Hartung, Stefan Langerman, Joseph O'Rourke, Ryuhei Uehara, Yushi Uno, and Aaron Williams. Rolling Polyhedra on Tessellations. In 11th International Conference on Fun with Algorithms (FUN 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 226, pp. 6:1-6:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{baes_et_al:LIPIcs.FUN.2022.6, author = {Baes, Akira and Demaine, Erik D. and Demaine, Martin L. and Hartung, Elizabeth and Langerman, Stefan and O'Rourke, Joseph and Uehara, Ryuhei and Uno, Yushi and Williams, Aaron}, title = {{Rolling Polyhedra on Tessellations}}, booktitle = {11th International Conference on Fun with Algorithms (FUN 2022)}, pages = {6:1--6:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-232-7}, ISSN = {1868-8969}, year = {2022}, volume = {226}, editor = {Fraigniaud, Pierre and Uno, Yushi}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2022.6}, URN = {urn:nbn:de:0030-drops-159761}, doi = {10.4230/LIPIcs.FUN.2022.6}, annote = {Keywords: polyhedra, tilings} }

Document

**Published in:** LIPIcs, Volume 226, 11th International Conference on Fun with Algorithms (FUN 2022)

You provide us with a matroid and an initial base. We say that a subset of the bases "belongs to us" if we can visit each one via a sequence of base exchanges starting from the initial base. It is well-known that "All your base are belong to us". We refine this classic result by showing that it can be done by a simple greedy algorithm. For example, the spanning trees of a graph can be generated by edge exchanges using the following greedy rule: Minimize the larger label of an edge that enters or exits the current spanning tree and which creates a spanning tree that is new (i.e., hasn't been visited already). Amazingly, this works for any graph, for any labeling of its edges, for any initial spanning tree, and regardless of how you choose the edge with the smaller label in each exchange. Furthermore, by maintaining a small amount of information, we can generate each successive spanning tree without storing the previous trees.
In general, for any matroid, we can greedily compute a listing of all of its bases such that consecutive bases differ by a base exchange. Our base exchange Gray codes apply a prefix-exchange on a prefix-minor of the matroid, and we can generate these orders using "history-free" iterative algorithms. More specifically, we store O(m) bits of data and use O(m) time per base, assuming O(1)-time independence and coindependence oracles.
Our work generalizes and extends a number of previous results. For example, the bases of the uniform matroid are combinations, and they belong to us using homogeneous transpositions via an Eades-McKay style order. Similarly, the spanning trees of fan graphs belong to us via face pivot Gray codes, which extend recent results of Cameron, Grubb, and Sawada [Pivot Gray Codes for the Spanning Trees of a Graph ft. the Fan, COCOON 2021].
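
As an illustration of the greedy rule for spanning trees, here is a brute-force Python sketch. It stores the set of visited trees explicitly for simplicity, whereas the paper's algorithms are history-free; edge labels are taken to be list positions, and the function names and interface are our own choices.

```python
def is_spanning_tree(n, edge_list):
    """Check that `edge_list` (pairs of vertices 0..n-1) forms a spanning tree."""
    if len(edge_list) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding this edge would create a cycle
        parent[ru] = rv
    return True  # n-1 edges and acyclic => spanning tree

def greedy_exchange_listing(n, edges, start):
    """List all spanning trees of the graph (V = 0..n-1, `edges`) by greedy
    base exchanges, starting from the tree `start` (a set of edge indices).
    Greedy rule: among all exchanges yielding a not-yet-visited spanning
    tree, minimize the larger label of the exchanged pair of edges."""
    tree = frozenset(start)
    visited = {tree}
    order = [tree]
    while True:
        candidates = []
        for e_out in tree:
            for e_in in set(range(len(edges))) - tree:
                new = (tree - {e_out}) | {e_in}
                if new not in visited and is_spanning_tree(n, [edges[i] for i in new]):
                    candidates.append((max(e_in, e_out), new))
        if not candidates:
            return order  # every spanning tree has been visited
        _, tree = min(candidates, key=lambda c: c[0])
        visited.add(tree)
        order.append(tree)
```

For example, on the triangle with edges [(0,1), (1,2), (0,2)] and initial tree {0, 1}, the listing visits all three spanning trees, in line with the theorem that the greedy rule works from any initial tree and any edge labeling.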

Arturo Merino, Torsten Mütze, and Aaron Williams. All Your bases Are Belong to Us: Listing All Bases of a Matroid by Greedy Exchanges. In 11th International Conference on Fun with Algorithms (FUN 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 226, pp. 22:1-22:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{merino_et_al:LIPIcs.FUN.2022.22, author = {Merino, Arturo and M\"{u}tze, Torsten and Williams, Aaron}, title = {{All Your bases Are Belong to Us: Listing All Bases of a Matroid by Greedy Exchanges}}, booktitle = {11th International Conference on Fun with Algorithms (FUN 2022)}, pages = {22:1--22:28}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-232-7}, ISSN = {1868-8969}, year = {2022}, volume = {226}, editor = {Fraigniaud, Pierre and Uno, Yushi}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2022.22}, URN = {urn:nbn:de:0030-drops-159928}, doi = {10.4230/LIPIcs.FUN.2022.22}, annote = {Keywords: Matroids, base exchange, Gray codes, combinatorial generation, greedy algorithms, spanning trees} }

Document

**Published in:** LIPIcs, Volume 49, 8th International Conference on Fun with Algorithms (FUN 2016)

Mario is back! In this sequel, we prove that solving a generalized level of Super Mario Bros. is PSPACE-complete, strengthening the previous NP-hardness result (FUN 2014). Both our PSPACE-hardness and the previous NP-hardness use levels of arbitrary dimensions and require either arbitrarily large screens or a game engine that remembers the state of off-screen sprites. We also analyze the complexity of the less general case where the screen size is constant, the number of on-screen sprites is constant, and the game engine forgets the state of everything substantially off-screen, as in most, if not all, Super Mario Bros. video games. In this case we prove that the game is solvable in polynomial time, assuming levels are explicitly encoded; on the other hand, if levels can be represented using run-length encoding, then the problem is weakly NP-hard (even if levels have only constant height, as in the video games). All of our hardness proofs are also resilient to known glitches in Super Mario Bros., unlike the previous NP-hardness proof.

Erik D. Demaine, Giovanni Viglietta, and Aaron Williams. Super Mario Bros. is Harder/Easier Than We Thought. In 8th International Conference on Fun with Algorithms (FUN 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 49, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{demaine_et_al:LIPIcs.FUN.2016.13, author = {Demaine, Erik D. and Viglietta, Giovanni and Williams, Aaron}, title = {{Super Mario Bros. is Harder/Easier Than We Thought}}, booktitle = {8th International Conference on Fun with Algorithms (FUN 2016)}, pages = {13:1--13:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-005-7}, ISSN = {1868-8969}, year = {2016}, volume = {49}, editor = {Demaine, Erik D. and Grandoni, Fabrizio}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2016.13}, URN = {urn:nbn:de:0030-drops-58802}, doi = {10.4230/LIPIcs.FUN.2016.13}, annote = {Keywords: video games, computational complexity, PSPACE} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

The diameter, radius and eccentricities are natural graph parameters. While these problems have been studied extensively, there are no known dynamic algorithms for them beyond the ones that follow from trivial recomputation after each update or from solving dynamic All-Pairs Shortest Paths (APSP), which is very computationally intensive. The same is true for dynamic approximation algorithms, even when only edge insertions or only edge deletions need to be supported.
This paper provides a comprehensive study of the dynamic approximation of Diameter, Radius and Eccentricities, providing both conditional lower bounds, and new algorithms whose bounds are optimal under popular hypotheses in fine-grained complexity. Some of the highlights include:
- Under popular hardness hypotheses, there can be no significantly better fully dynamic approximation algorithms than recomputing the answer after each update, or maintaining full APSP.
- Nearly optimal partially dynamic (incremental/decremental) algorithms can be achieved via efficient reductions to (incremental/decremental) maintenance of Single-Source Shortest Paths. For instance, a nearly (3/2+ε)-approximation to Diameter in directed or undirected n-vertex, m-edge graphs can be maintained decrementally in total time m^{1+o(1)}√n/ε^2. This nearly matches the static 3/2-approximation algorithm for the problem that is known to be conditionally optimal.

Bertie Ancona, Monika Henzinger, Liam Roditty, Virginia Vassilevska Williams, and Nicole Wein. Algorithms and Hardness for Diameter in Dynamic Graphs. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{ancona_et_al:LIPIcs.ICALP.2019.13, author = {Ancona, Bertie and Henzinger, Monika and Roditty, Liam and Williams, Virginia Vassilevska and Wein, Nicole}, title = {{Algorithms and Hardness for Diameter in Dynamic Graphs}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {13:1--13:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.13}, URN = {urn:nbn:de:0030-drops-105891}, doi = {10.4230/LIPIcs.ICALP.2019.13}, annote = {Keywords: fine-grained complexity, graph algorithms, dynamic algorithms} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

We study fundamental graph parameters such as the Diameter and Radius in directed graphs, when distances are measured using a somewhat unorthodox but natural measure: the distance between u and v is the minimum of the shortest path distances from u to v and from v to u. Under this measure, the center node of a graph can, for instance, represent the optimal location for a hospital that ensures the fastest medical care for everyone, as one can either go to the hospital or have a doctor sent out to help.
By computing All-Pairs Shortest Paths, all pairwise distances and thus the parameters we study can be computed exactly in Õ(mn) time for directed graphs on n vertices, m edges and nonnegative edge weights. Furthermore, this time bound is tight under the Strong Exponential Time Hypothesis [Roditty-Vassilevska W. STOC 2013], so it is natural to study how well these parameters can be approximated in O(mn^{1-ε}) time for constant ε > 0. Abboud, Vassilevska Williams, and Wang [SODA 2016] gave a polynomial factor approximation for Diameter and Radius, as well as a constant factor approximation for both problems in the special case where the graph is a DAG. We greatly improve upon these bounds by providing the first constant factor approximations for Diameter, Radius and the related Eccentricities problem in general graphs. Additionally, we provide a hierarchy of algorithms for Diameter that gives a time/accuracy trade-off.
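
For concreteness, here is a small exact baseline: compute APSP with Floyd-Warshall (cubic time, standing in for the Õ(mn) Dijkstra-from-every-vertex baseline above), fold the distances into the min-distance measure, and read off eccentricities, radius and diameter. The function name and input format are our assumptions.

```python
def min_distance_parameters(n, weighted_edges):
    """Exact min-distance eccentricities, radius and diameter.

    `weighted_edges` is a list of directed (u, v, w) triples with
    nonnegative weights w, over vertices 0..n-1.
    """
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    for v in range(n):
        dist[v][v] = 0
    for u, v, w in weighted_edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):  # Floyd-Warshall APSP
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Min-distance measure: d(u, v) = min(dist(u, v), dist(v, u)).
    d = [[min(dist[u][v], dist[v][u]) for v in range(n)] for u in range(n)]
    ecc = [max(row) for row in d]   # min-eccentricity of each vertex
    return ecc, min(ecc), max(ecc)  # eccentricities, radius, diameter
```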

Mina Dalirrooyfard, Virginia Vassilevska Williams, Nikhil Vyas, Nicole Wein, Yinzhan Xu, and Yuancheng Yu. Approximation Algorithms for Min-Distance Problems. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{dalirrooyfard_et_al:LIPIcs.ICALP.2019.46, author = {Dalirrooyfard, Mina and Williams, Virginia Vassilevska and Vyas, Nikhil and Wein, Nicole and Xu, Yinzhan and Yu, Yuancheng}, title = {{Approximation Algorithms for Min-Distance Problems}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {46:1--46:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.46}, URN = {urn:nbn:de:0030-drops-106223}, doi = {10.4230/LIPIcs.ICALP.2019.46}, annote = {Keywords: fine-grained complexity, graph algorithms, diameter, radius, eccentricities} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

Some of the most fundamental and well-studied graph parameters are the Diameter (the largest shortest paths distance) and Radius (the smallest distance for which a "center" node can reach all other nodes). The natural and important ST-variant considers two subsets S and T of the vertex set and lets the ST-diameter be the maximum distance between a node in S and a node in T, and the ST-radius be the minimum distance for a node of S to reach all nodes of T. The bichromatic variant is the special case in which S and T partition the vertex set.
In this paper we present a comprehensive study of the approximability of ST and Bichromatic Diameter, Radius, and Eccentricities, and variants, in graphs with and without directions and weights. We give the first nontrivial approximation algorithms for most of these problems, including time/accuracy trade-off upper and lower bounds. We show that nearly all of our obtained bounds are tight under the Strong Exponential Time Hypothesis (SETH), or the related Hitting Set Hypothesis.
For instance, for Bichromatic Diameter in undirected weighted graphs with m edges, we present an Õ(m^{3/2}) time 5/3-approximation algorithm, and show that under SETH, neither the running time nor the approximation factor can be significantly improved while keeping the other unchanged.
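
As a reference point, the exact values are easy to compute in quadratic time by running BFS from every vertex of S; the paper's contribution is trading accuracy for subquadratic time. A minimal unweighted sketch, with input conventions of our choosing:

```python
from collections import deque

def st_diameter_radius(adj, S, T):
    """Exact ST-diameter and ST-radius of an unweighted graph.

    `adj` maps each vertex to an iterable of neighbors; S and T are
    vertex sets. ST-diameter = max over s in S, t in T of d(s, t);
    ST-radius = min over s in S of max over t in T of d(s, t).
    """
    INF = float("inf")

    def bfs(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    st_diam, st_rad = 0, INF
    for s in S:
        dist = bfs(s)
        far = max(dist.get(t, INF) for t in T)  # eccentricity of s w.r.t. T
        st_diam = max(st_diam, far)
        st_rad = min(st_rad, far)
    return st_diam, st_rad
```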

Mina Dalirrooyfard, Virginia Vassilevska Williams, Nikhil Vyas, and Nicole Wein. Tight Approximation Algorithms for Bichromatic Graph Diameter and Related Problems. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 47:1-47:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{dalirrooyfard_et_al:LIPIcs.ICALP.2019.47, author = {Dalirrooyfard, Mina and Williams, Virginia Vassilevska and Vyas, Nikhil and Wein, Nicole}, title = {{Tight Approximation Algorithms for Bichromatic Graph Diameter and Related Problems}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {47:1--47:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.47}, URN = {urn:nbn:de:0030-drops-106238}, doi = {10.4230/LIPIcs.ICALP.2019.47}, annote = {Keywords: approximation algorithms, fine-grained complexity, diameter, radius, eccentricities} }

Document

**Published in:** LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)

Since the introduction of retroactive data structures at SODA 2004, a major unsolved problem has been to bound the gap between the best partially retroactive data structure (where changes can be made to the past, but only the present can be queried) and the best fully retroactive data structure (where the past can also be queried) for any problem. It was proved in 2004 that any partially retroactive data structure with operation time T_{op}(n,m) can be transformed into a fully retroactive data structure with operation time O(√m · T_{op}(n,m)), where n is the size of the data structure and m is the number of operations in the timeline [Demaine et al., 2004]. But it has been open for 14 years whether such a gap is necessary.
In this paper, we prove nearly matching upper and lower bounds on this gap for all n and m. We improve the upper bound for n ≪ √m by showing a new transformation with multiplicative overhead n log m. We then prove a lower bound of min{n log m, √m}^{1-o(1)} assuming any of the following conjectures:
- Conjecture I: Circuit SAT requires 2^{n - o(n)} time on n-input circuits of size 2^{o(n)}. This conjecture is far weaker than the well-believed SETH conjecture from complexity theory, which asserts that CNF SAT with n variables and O(n) clauses already requires 2^{n-o(n)} time.
- Conjecture II: Online (min,+) product between an integer n × n matrix and n vectors requires n^{3 - o(1)} time. This conjecture is weaker than the APSP conjectures widely used in fine-grained complexity.
- Conjecture III (3-SUM Conjecture): Given three sets A,B,C of integers, each of size n, deciding whether there exist a in A, b in B, c in C such that a + b + c = 0 requires n^{2 - o(1)} time. This 1995 conjecture [Anka Gajentaan and Mark H. Overmars, 1995] was the first conjecture in fine-grained complexity.
Our lower bound construction illustrates an interesting power of fully retroactive queries: they can be used to quickly solve batched pair evaluation. We believe this technique can prove useful for other data structure lower bounds, especially dynamic ones.
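
For intuition about the classical O(√m)-overhead transformation mentioned above, here is a toy checkpoint-and-replay wrapper. The wrapped structure's interface (apply/query), the lazy snapshot refresh, and all names are our illustrative simplifications, not the construction from the papers: a query clones the nearest snapshot at or before time t and replays at most about √m operations.

```python
import copy
import math

class FullyRetroactive:
    """Toy partial-to-fully-retroactive wrapper via sqrt-decomposition:
    snapshots are kept roughly every sqrt(m) operations, so a query in
    the past replays only a short suffix of the timeline."""

    def __init__(self, make_structure):
        self.make = make_structure  # factory for an empty structure
        self.ops = []               # the operation timeline
        self.snaps = {}             # time index -> snapshot after that many ops

    def retro_insert(self, t, op):
        """Insert operation `op` at position t in the timeline."""
        self.ops.insert(t, op)
        # Snapshots taken at or after time t are now stale; drop them.
        self.snaps = {i: s for i, s in self.snaps.items() if i <= t}

    def query_at(self, t):
        """Query the state just after the first t operations."""
        block = max(1, math.isqrt(len(self.ops)))
        base = max((i for i in self.snaps if i <= t), default=0)
        state = copy.deepcopy(self.snaps[base]) if base else self.make()
        for i in range(base, t):
            state.apply(self.ops[i])
            if (i + 1) % block == 0:  # refresh snapshots while replaying
                self.snaps[i + 1] = copy.deepcopy(state)
        return state.query()
```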

Lijie Chen, Erik D. Demaine, Yuzhou Gu, Virginia Vassilevska Williams, Yinzhan Xu, and Yuancheng Yu. Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 33:1-33:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


@InProceedings{chen_et_al:LIPIcs.SWAT.2018.33, author = {Chen, Lijie and Demaine, Erik D. and Gu, Yuzhou and Williams, Virginia Vassilevska and Xu, Yinzhan and Yu, Yuancheng}, title = {{Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures}}, booktitle = {16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)}, pages = {33:1--33:12}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-068-2}, ISSN = {1868-8969}, year = {2018}, volume = {101}, editor = {Eppstein, David}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.33}, URN = {urn:nbn:de:0030-drops-88593}, doi = {10.4230/LIPIcs.SWAT.2018.33}, annote = {Keywords: retroactive data structure, conditional lower bound} }

Document

**Published in:** LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)

We introduce three general compositionality criteria over operational semantics and prove that, when all three are satisfied together, they guarantee weak bisimulation being a congruence. Our work is founded upon Turi and Plotkin’s mathematical operational semantics and the coalgebraic approach to weak bisimulation by Brengos. We demonstrate each criterion with various examples of success and failure and establish a formal connection with the simply WB cool rule format of Bloom and van Glabbeek. In addition, we show that the three criteria induce lax models in the sense of Bonchi et al.

Stelios Tsampas, Christian Williams, Andreas Nuyts, Dominique Devriese, and Frank Piessens. Abstract Congruence Criteria for Weak Bisimilarity. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 88:1-88:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{tsampas_et_al:LIPIcs.MFCS.2021.88, author = {Tsampas, Stelios and Williams, Christian and Nuyts, Andreas and Devriese, Dominique and Piessens, Frank}, title = {{Abstract Congruence Criteria for Weak Bisimilarity}}, booktitle = {46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)}, pages = {88:1--88:23}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-201-3}, ISSN = {1868-8969}, year = {2021}, volume = {202}, editor = {Bonchi, Filippo and Puglisi, Simon J.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.88}, URN = {urn:nbn:de:0030-drops-145281}, doi = {10.4230/LIPIcs.MFCS.2021.88}, annote = {Keywords: Structural Operational Semantics, distributive laws, weak bisimilarity} }

Document

**Published in:** LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)

Priority queues with parallel access are an attractive data structure for applications like prioritized online scheduling, discrete event simulation, or greedy algorithms. However, a classical priority queue constitutes a severe bottleneck in this context, leading to very small throughput. Hence, there has been significant interest in concurrent priority queues with relaxed semantics. We investigate the complementary quality criteria rank error (how close are deleted elements to the global minimum) and delay (for each element x, how many elements with lower priority are deleted before x). In this paper, we introduce MultiQueues as a natural approach to relaxed priority queues based on multiple sequential priority queues. Their naturally high theoretical scalability is further enhanced by using three orthogonal ways of batching operations on the sequential queues. Experiments indicate that MultiQueues present a very good performance-quality tradeoff and considerably outperform competing approaches in at least one of these aspects.
We employ a seemingly paradoxical technique of "wait-free locking" that might be of more general interest for converting sequential data structures into relaxed concurrent data structures.
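
To illustrate the core relaxation (ignoring all concurrency), here is a sequential Python sketch of the MultiQueue idea: insert into a random heap, and delete from the smaller-topped of two randomly sampled heaps. Class and method names are ours; the paper's engineering contributions (locking, the three batching dimensions, wait-free locking) sit on top of exactly this skeleton.

```python
import heapq
import random

class MultiQueue:
    """Relaxed priority queue over several ordinary binary heaps.

    pop() returns an element close to, but not necessarily equal to,
    the global minimum: the expected rank error stays small because we
    take the better of two randomly chosen queues.
    """

    def __init__(self, num_queues=8, seed=None):
        self.queues = [[] for _ in range(num_queues)]
        self.rng = random.Random(seed)

    def push(self, item):
        heapq.heappush(self.rng.choice(self.queues), item)

    def pop(self):
        a, b = self.rng.sample(self.queues, 2)
        nonempty = [q for q in (a, b) if q]
        if not nonempty:  # fall back to scanning every queue
            nonempty = [q for q in self.queues if q]
            if not nonempty:
                raise IndexError("pop from empty MultiQueue")
        return heapq.heappop(min(nonempty, key=lambda q: q[0]))
```

Pushing the integers 0..999 and popping 1000 times yields a near-sorted sequence; the inversions in the output are exactly the rank errors the paper measures against delay.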

Marvin Williams, Peter Sanders, and Roman Dementiev. Engineering MultiQueues: Fast Relaxed Concurrent Priority Queues. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 81:1-81:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{williams_et_al:LIPIcs.ESA.2021.81, author = {Williams, Marvin and Sanders, Peter and Dementiev, Roman}, title = {{Engineering MultiQueues: Fast Relaxed Concurrent Priority Queues}}, booktitle = {29th Annual European Symposium on Algorithms (ESA 2021)}, pages = {81:1--81:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-204-4}, ISSN = {1868-8969}, year = {2021}, volume = {204}, editor = {Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.81}, URN = {urn:nbn:de:0030-drops-146627}, doi = {10.4230/LIPIcs.ESA.2021.81}, annote = {Keywords: concurrent data structure, priority queues, randomized algorithms, wait-free locking} }
