LIPIcs, Volume 360

45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)



Editors

C. Aiswarya
  • Chennai Mathematical Institute, India
  • IRL ReLaX, CNRS, France
Ruta Mehta
  • University of Illinois at Urbana-Champaign, IL, USA
Subhajit Roy
  • Indian Institute of Technology Kanpur, India

Publication Details

  • published at: 2025-12-09
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-406-2

Documents

Document
Complete Volume
LIPIcs, Volume 360, FSTTCS 2025, Complete Volume

Authors: C. Aiswarya, Ruta Mehta, and Subhajit Roy


Abstract
LIPIcs, Volume 360, FSTTCS 2025, Complete Volume

Cite as

45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 1-970, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@Proceedings{aiswarya_et_al:LIPIcs.FSTTCS.2025,
  title =	{{LIPIcs, Volume 360, FSTTCS 2025, Complete Volume}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{1--970},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025},
  URN =		{urn:nbn:de:0030-drops-252445},
  doi =		{10.4230/LIPIcs.FSTTCS.2025},
  annote =	{Keywords: LIPIcs, Volume 360, FSTTCS 2025, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: C. Aiswarya, Ruta Mehta, and Subhajit Roy


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 0:i-0:xx, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{aiswarya_et_al:LIPIcs.FSTTCS.2025.0,
  author =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.0},
  URN =		{urn:nbn:de:0030-drops-252430},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Quantum Circuit Verification - A Potential Roadmap (Invited Talk)

Authors: Parosh Aziz Abdulla, Yu-Fang Chen, Michal Hečko, Lukáš Holík, Ondřej Lengál, Jyun-Ao Lin, and Ramanathan Thinniyam Srinivasan


Abstract
Quantum technologies are progressing at an extraordinary pace and are poised to transform numerous sectors both nationally and globally. Among them, quantum computing stands out for its potential to revolutionize areas such as cryptography, optimization, and the simulation of quantum systems, offering dramatic speed-ups for specific classes of problems. As quantum devices evolve and become increasingly pervasive, guaranteeing their correctness is of paramount importance. This necessitates the development of rigorous methods and tools to analyze and verify their behavior. However, the construction of such verification frameworks presents fundamental challenges. Quantum phenomena such as superposition and entanglement give rise to computational behaviors that differ profoundly from those of classical systems, leading to inherently probabilistic models and exponentially large state spaces, even for relatively small programs. Addressing these challenges requires building on the extensive expertise of the formal methods community in classical program verification, while incorporating recent advances and collaborative efforts in quantum systems. An interesting challenge for the verification community is to design and implement novel verification frameworks that transfer the key strengths of classical verification, such as expressive specification, precise error detection, automation, and scalability, to the quantum domain. We expect that the results of this research will play a crucial role in enabling the dependable deployment of quantum technologies across a wide range of future applications.

Cite as

Parosh Aziz Abdulla, Yu-Fang Chen, Michal Hečko, Lukáš Holík, Ondřej Lengál, Jyun-Ao Lin, and Ramanathan Thinniyam Srinivasan. Quantum Circuit Verification - A Potential Roadmap (Invited Talk). In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 1:1-1:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{abdulla_et_al:LIPIcs.FSTTCS.2025.1,
  author =	{Abdulla, Parosh Aziz and Chen, Yu-Fang and He\v{c}ko, Michal and Hol{\'\i}k, Luk\'{a}\v{s} and Leng\'{a}l, Ond\v{r}ej and Lin, Jyun-Ao and Srinivasan, Ramanathan Thinniyam},
  title =	{{Quantum Circuit Verification - A Potential Roadmap}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{1:1--1:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.1},
  URN =		{urn:nbn:de:0030-drops-250806},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.1},
  annote =	{Keywords: Quantum Circuits, Quantum Computing, Program Verification, Automata, Model Checking}
}
Document
Invited Talk
Unboundedness Problems for Formal Languages (Invited Talk)

Authors: Georg Zetzsche


Abstract
Informally, unboundedness problems are decision problems that ask about the existence of infinitely many words (satisfying certain properties) in a formal language. For example: Is a given language infinite? Or: Does a given language have super-polynomial growth? These came into focus in recent years because of their connections to downward closure computation and separability problems. Although unboundedness problems may seem difficult at first, it turns out that there are techniques that are at the same time conceptually very simple, but also apply to a surprisingly wide variety of language classes. The talk will survey recent results (and techniques) concerning unboundedness problems.

Cite as

Georg Zetzsche. Unboundedness Problems for Formal Languages (Invited Talk). In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 2:1-2:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{zetzsche:LIPIcs.FSTTCS.2025.2,
  author =	{Zetzsche, Georg},
  title =	{{Unboundedness Problems for Formal Languages}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{2:1--2:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.2},
  URN =		{urn:nbn:de:0030-drops-250810},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.2},
  annote =	{Keywords: Decidability, formal languages, unifying frameworks, downward closure, separability}
}
Document
PDQMA = DQMA = NEXP: QMA with Hidden Variables and Non-Collapsing Measurements

Authors: Scott Aaronson, Sabee Grewal, Vishnu Iyer, Simon C. Marshall, and Ronak Ramachandran


Abstract
We define and study a variant of QMA (Quantum Merlin Arthur) in which Arthur can make multiple non-collapsing measurements to Merlin’s witness state, in addition to ordinary collapsing measurements. By analogy to the class PDQP defined by Aaronson, Bouland, Fitzsimons, and Lee (2014), we call this class PDQMA. Our main result is that PDQMA = NEXP; this result builds on the PCP theorem and complements the result of Aaronson (2018) that PDQP/qpoly = ALL. While the result has little to do with quantum mechanics, we also show a more "quantum" result: namely, that QMA with the ability to inspect the entire history of a hidden variable is equal to NEXP, under mild assumptions on the hidden-variable theory. We also observe that a quantum computer, augmented with quantum advice and the ability to inspect the history of a hidden variable, can solve any decision problem in polynomial time.

Cite as

Scott Aaronson, Sabee Grewal, Vishnu Iyer, Simon C. Marshall, and Ronak Ramachandran. PDQMA = DQMA = NEXP: QMA with Hidden Variables and Non-Collapsing Measurements. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{aaronson_et_al:LIPIcs.FSTTCS.2025.3,
  author =	{Aaronson, Scott and Grewal, Sabee and Iyer, Vishnu and Marshall, Simon C. and Ramachandran, Ronak},
  title =	{{PDQMA = DQMA = NEXP: QMA with Hidden Variables and Non-Collapsing Measurements}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.3},
  URN =		{urn:nbn:de:0030-drops-250828},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.3},
  annote =	{Keywords: quantum Merlin-Arthur, non-collapsing measurements, hidden-variable theories}
}
Document
How Pinball Wizards Simulate a Turing Machine

Authors: Rosemary U. Adejoh, Andreas Jakoby, Sneha Mohanty, and Christian Schindelhauer


Abstract
We introduce and investigate the computational complexity of a novel physical problem known as the Pinball Wizard problem. It involves an idealized pinball moving through a maze composed of one-way gates (outswing doors), plane walls, parabolic walls, moving plane walls, and bumpers that cause acceleration or deceleration. Given the initial position and velocity of the pinball, the task is to decide whether it will hit a specified target point. By simulating a two-stack pushdown automaton, we show that the problem is Turing-complete - even in two-dimensional space. In our construction, each step of the automaton corresponds to a constant number of reflections. Thus, deciding the Pinball Wizard problem is at least as hard as the Halting problem. Furthermore, our construction allows bumpers to be replaced with moving walls. In this case, even a ball moving at constant speed - a so-called ray particle - can be used, demonstrating that the Ray Particle Tracing problem is also Turing-complete.

Cite as

Rosemary U. Adejoh, Andreas Jakoby, Sneha Mohanty, and Christian Schindelhauer. How Pinball Wizards Simulate a Turing Machine. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{adejoh_et_al:LIPIcs.FSTTCS.2025.4,
  author =	{Adejoh, Rosemary U. and Jakoby, Andreas and Mohanty, Sneha and Schindelhauer, Christian},
  title =	{{How Pinball Wizards Simulate a Turing Machine}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.4},
  URN =		{urn:nbn:de:0030-drops-250832},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.4},
  annote =	{Keywords: Pinball Wizard problem, Halting problem, Turing-complete}
}
Document
Distributed Games with a Central Decision Maker

Authors: Bharat Adsul and Nehul Jain


Abstract
We study distributed games played on non-deterministic asynchronous automata which feature a central decision maker process that participates in all key decision making tasks. In these partial-information games, processes use their causal past to respond to scheduling choices made by the scheduler and cooperatively strategize as a team to achieve the winning objective. We show that the problem of deciding the existence of a distributed winning strategy is efficiently solvable for global safety and local parity objectives. We provide algorithmic solutions that match their computational hardness. We formulate the notion of a finite-state distributed strategy, which allows us to quantify its distributed memory requirements. For the aforementioned objectives, we establish that finite-state distributed winning strategies always exist. In fact, we provide novel constructions of such winning strategies which are shown to use an almost optimal amount of distributed memory. We also show that a natural extension of the model with two decision making processes is undecidable.

Cite as

Bharat Adsul and Nehul Jain. Distributed Games with a Central Decision Maker. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{adsul_et_al:LIPIcs.FSTTCS.2025.5,
  author =	{Adsul, Bharat and Jain, Nehul},
  title =	{{Distributed Games with a Central Decision Maker}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.5},
  URN =		{urn:nbn:de:0030-drops-250843},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.5},
  annote =	{Keywords: Mazurkiewicz traces, models of concurrency, distributed synthesis, game-theoretic models, asynchronous automata, distributed decision-making}
}
Document
A Correct by Construction Fault Tolerant Voter for Input Selection of a Control System

Authors: Arif Ali AP, Jasine Babu, and Deepa Sara John


Abstract
Safety-critical systems use redundant input units to improve their reliability and fault tolerance. A voting logic is then used to select a reliable input from the redundant sources. Fault detection and isolation rules help in selecting the input units that can participate in voting. This work deals with the formal requirement formulation, design, verification, and synthesis of a generic voting unit for an N-modular redundant measurement system used for control applications in avionics systems. The work follows a correct-by-construction approach, using the Rocq theorem prover.

Cite as

Arif Ali AP, Jasine Babu, and Deepa Sara John. A Correct by Construction Fault Tolerant Voter for Input Selection of a Control System. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{aliap_et_al:LIPIcs.FSTTCS.2025.6,
  author =	{Ali AP, Arif and Babu, Jasine and John, Deepa Sara},
  title =	{{A Correct by Construction Fault Tolerant Voter for Input Selection of a Control System}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.6},
  URN =		{urn:nbn:de:0030-drops-250886},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.6},
  annote =	{Keywords: Fault Tolerant System Design, Formal Verification, Correct by Construction, Input Selection, Interactive Theorem Proving}
}
Document
Quantum Protocols for Rabin Oblivious Transfer

Authors: Erika Andersson, Akshay Bansal, James T. Peat, Jamie Sikora, and Jiawei Wu


Abstract
Rabin oblivious transfer is the cryptographic task where Alice wishes to receive a bit from Bob but it may get lost with probability 1/2. In this work, we provide protocol designs which yield quantum protocols with improved security. Moreover, we provide a constant lower bound on any quantum protocol for Rabin oblivious transfer. To quantify the security of this task with asymmetric cheating definitions, we introduce the notion of cheating advantage which may be of independent interest in the study of other asymmetric cryptographic primitives.

Cite as

Erika Andersson, Akshay Bansal, James T. Peat, Jamie Sikora, and Jiawei Wu. Quantum Protocols for Rabin Oblivious Transfer. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{andersson_et_al:LIPIcs.FSTTCS.2025.7,
  author =	{Andersson, Erika and Bansal, Akshay and Peat, James T. and Sikora, Jamie and Wu, Jiawei},
  title =	{{Quantum Protocols for Rabin Oblivious Transfer}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.7},
  URN =		{urn:nbn:de:0030-drops-250866},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.7},
  annote =	{Keywords: quantum cryptography, oblivious transfer, information-theoretic security}
}
Document
Parameterized Verification of Timed Networks with Clock Invariants

Authors: Étienne André, Swen Jacobs, Shyam Lal Karra, and Ocan Sankur


Abstract
We consider parameterized verification problems for networks of timed automata (TAs) based on different communication primitives. To this end, we first consider disjunctive timed networks (DTNs), i.e., networks of TAs that communicate via location guards that enable a transition only if there is another process in a certain location. We solve for the first time the case with unrestricted clock invariants, and establish that the parameterized model checking problem (PMCP) over finite local traces can be reduced to the corresponding model checking problem on a single TA. Moreover, we prove that the PMCP for networks that communicate via lossy broadcast can be reduced to the PMCP for DTNs. Finally, we show that for networks with k-wise synchronization, and therefore also for timed Petri nets, location reachability can be reduced to location reachability in DTNs. As a consequence, we positively answer the open problem from Abdulla et al. (2018) of whether the universal safety problem for timed Petri nets with multiple clocks is decidable.

Cite as

Étienne André, Swen Jacobs, Shyam Lal Karra, and Ocan Sankur. Parameterized Verification of Timed Networks with Clock Invariants. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 8:1-8:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{andre_et_al:LIPIcs.FSTTCS.2025.8,
  author =	{Andr\'{e}, \'{E}tienne and Jacobs, Swen and Karra, Shyam Lal and Sankur, Ocan},
  title =	{{Parameterized Verification of Timed Networks with Clock Invariants}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{8:1--8:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.8},
  URN =		{urn:nbn:de:0030-drops-250878},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.8},
  annote =	{Keywords: Networks of Timed Automata, Parameterized Verification, Timed Petri Nets}
}
Document
ε-Stationary Nash Equilibria in Multi-Player Stochastic Graph Games

Authors: Ali Asadi, Léonard Brice, Krishnendu Chatterjee, and K. S. Thejaswini


Abstract
A strategy profile in a multi-player game is a Nash equilibrium if no player can unilaterally deviate to achieve a strictly better payoff. A profile is an ε-Nash equilibrium if no player can gain more than ε by unilaterally deviating from their strategy. In this work, we use ε-Nash equilibria to approximate the computation of Nash equilibria. Specifically, we focus on turn-based, multiplayer stochastic games played on graphs, where players are restricted to stationary strategies - strategies that use randomness but not memory. The problem of deciding the constrained existence of stationary Nash equilibria - where each player’s payoff must lie within a given interval - is known to be ∃ℝ-complete in such a setting (Hansen and Sølvsten, 2020). We extend this line of work to stationary ε-Nash equilibria and present an algorithm that solves the following promise problem: given a game with a Nash equilibrium satisfying the constraints, compute an ε-Nash equilibrium that ε-satisfies those same constraints - satisfies the constraints up to an ε additive error. Our algorithm runs in FNP^NP time. To achieve this, we first show that if a constrained Nash equilibrium exists, then one exists where the non-zero probabilities are at least an inverse of a double-exponential in the input. We further prove that such a strategy can be encoded using floating-point representations, as in the work of Frederiksen and Miltersen (2013), which finally gives us our FNP^NP algorithm. We further show that the decision version of the promise problem is NP-hard. Finally, we show a partial tightness result by proving a lower bound for such techniques: if a constrained Nash equilibrium exists, then there must be one where the probabilities in the strategies are double-exponentially small.

Cite as

Ali Asadi, Léonard Brice, Krishnendu Chatterjee, and K. S. Thejaswini. ε-Stationary Nash Equilibria in Multi-Player Stochastic Graph Games. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{asadi_et_al:LIPIcs.FSTTCS.2025.9,
  author =	{Asadi, Ali and Brice, L\'{e}onard and Chatterjee, Krishnendu and Thejaswini, K. S.},
  title =	{{\epsilon-Stationary Nash Equilibria in Multi-Player Stochastic Graph Games}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.9},
  URN =		{urn:nbn:de:0030-drops-250897},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.9},
  annote =	{Keywords: Nash Equilibria, \epsilon-Nash equilibria, Approximation, Existential Theory of Reals}
}
Document
Cat Herding Game Played on Infinite Trees

Authors: Rylo Ashmore and Sophie Pinchinat


Abstract
The game of Cat Herding is played on a graph between two players, the cat and the herder. In the game setup, the cat chooses a starting vertex for its token. Then, both players alternate turns, beginning with the herder: the herder deletes (any) one edge, called a cut, and the cat moves along a path to a new vertex. While this game has been studied on finite graph arenas with regard to how optimally the herder wins, we shift our attention to an infinite version of the game where the cat may now survive indefinitely. We show that the cat’s winning positions in an infinite tree can be characterized by a monadic second-order statement, which is equivalent to containing a complete infinite binary tree as a minor, or to having uncountably many distinct rays. We take advantage of the logical characterization of the cat’s winning positions to generalize a measure known as the cat number to ordinals.

Cite as

Rylo Ashmore and Sophie Pinchinat. Cat Herding Game Played on Infinite Trees. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{ashmore_et_al:LIPIcs.FSTTCS.2025.10,
  author =	{Ashmore, Rylo and Pinchinat, Sophie},
  title =	{{Cat Herding Game Played on Infinite Trees}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.10},
  URN =		{urn:nbn:de:0030-drops-250902},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.10},
  annote =	{Keywords: Pursuit-evasion games, Cat Herding, Cat number, Infinite trees, Monadic Second Order Logic, Ordinals}
}
Document
Stability Notions for Hospital Residents with Sizes

Authors: Haricharan Balasundaram, J. B. Krishnashree, Girija Limaye, and Meghana Nasre


Abstract
The Hospital Residents problem with sizes (HRS) is a generalisation of the well-studied hospital residents (HR) problem. In the HRS problem, an agent a has a size s(a) and the agent occupies s(a) many positions of the hospital h when assigned to h. The notion of stability in this setting is suitably modified, and it is known that deciding whether an HRS instance admits a stable matching is NP-hard under severe restrictions. In this work, we explore a variation of stability, which we term occupancy-based stability. This notion was defined by McDermid and Manlove (J. of Comb. Opt. 2010) but remained unexplored to the best of our knowledge. In our work, we show that every HRS instance admits an occupancy-stable matching. We further show that computing a maximum-size occupancy-stable matching is NP-hard. We complement our hardness result by providing an approximation algorithm with a guarantee strictly better than 3 for the max-size occupancy-stable matching problem. Given that the classical notion of stability adapted for HRS is not guaranteed to exist in general, we show a practical restriction under which a stable matching is guaranteed to exist. We present an efficient algorithm to output a stable matching in the restricted HRS instances. We also provide an alternate NP-hardness proof for the decision version of the stable matching problem for HRS which imposes a severe restriction on the number of neighbours of non-unit sized agents.

Cite as

Haricharan Balasundaram, J. B. Krishnashree, Girija Limaye, and Meghana Nasre. Stability Notions for Hospital Residents with Sizes. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 11:1-11:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{balasundaram_et_al:LIPIcs.FSTTCS.2025.11,
  author =	{Balasundaram, Haricharan and Krishnashree, J. B. and Limaye, Girija and Nasre, Meghana},
  title =	{{Stability Notions for Hospital Residents with Sizes}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.11},
  URN =		{urn:nbn:de:0030-drops-250914},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.11},
  annote =	{Keywords: Stable matchings, Hospital Residents with sizes, Approximation algorithms, NP-hardness}
}
Document
Explorability in Pushdown Automata

Authors: Ayaan Bedi and Karoliina Lehtinen


Abstract
We study explorability, a measure of nondeterminism in pushdown automata, which generalises history-determinism. An automaton is k-explorable if, while reading the input, it suffices to follow k concurrent runs, built step-by-step based only on the input seen so far, to construct an accepting one, if it exists. We show that the class of explorable PDAs lies strictly between history-deterministic and fully nondeterministic PDAs in terms of both expressiveness and succinctness. In fact, increasing explorability induces an infinite hierarchy: each level k defines a strictly more expressive class than level k-1, yet the entire class remains less expressive than general nondeterministic PDAs. We then introduce a parameterized notion of explorability, where the number of runs may depend on input length, and show that exponential explorability precisely captures the context-free languages. Finally, we prove that explorable PDAs can be doubly exponentially more succinct than history-deterministic ones, and that the succinctness gap between deterministic and 2-explorable PDAs is not recursively enumerable. These results position explorability as a robust and operationally meaningful measure of nondeterminism for pushdown systems.

Cite as

Ayaan Bedi and Karoliina Lehtinen. Explorability in Pushdown Automata. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 12:1-12:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bedi_et_al:LIPIcs.FSTTCS.2025.12,
  author =	{Bedi, Ayaan and Lehtinen, Karoliina},
  title =	{{Explorability in Pushdown Automata}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{12:1--12:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.12},
  URN =		{urn:nbn:de:0030-drops-250921},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.12},
  annote =	{Keywords: Pushdown automata, nondeterminism, explorability, history-determinism}
}
Document
Synthesising Full-Information Protocols

Authors: Dietmar Berwanger, Laurent Doyen, and Thomas Soullard


Abstract
We study a communication model where processes reveal their entire local information whenever they interact. However, the system involves an indeterminate environment that may control when a communication event occurs and which participants are involved. As a result, the amount of information a process may receive at once is unbounded. Such full-information protocols are common in the distributed-computing literature. Here, we consider synchronous systems, modelled as infinite games with imperfect information played on finite graphs. We present a decision procedure for the synthesis of a process with an ω-regular specification in a system where the other participating processes are fixed. The challenge lies in constructing a finite representation of information trees with unbounded branching. Our construction is non-elementary in the size of the problem instance, and we establish a matching non-elementary lower bound for the complexity of the synthesis problem.

Cite as

Dietmar Berwanger, Laurent Doyen, and Thomas Soullard. Synthesising Full-Information Protocols. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{berwanger_et_al:LIPIcs.FSTTCS.2025.13,
  author =	{Berwanger, Dietmar and Doyen, Laurent and Soullard, Thomas},
  title =	{{Synthesising Full-Information Protocols}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.13},
  URN =		{urn:nbn:de:0030-drops-250930},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.13},
  annote =	{Keywords: Infinite Games on Finite Graphs, Imperfect Information, Reactive Processes, Distributed Synthesis, Dynamic Networks}
}
Document
Regulating Synchronous Data Exchange to Meet Control Flow and Data Specifications

Authors: Ashwin Bhaskar and M. Praveen


Abstract
When multiple software components interact via method calls, we may want to ensure that the order of invoked methods and the arguments provided adhere to some specification. The classic problem associated with interface automata checks for the existence of a mediator that acts as a buffer between method invocations so that invocations do not go unanswered. We extend the base model underlying interface automata, enabling them to exchange integer values: one automaton generates an integer value and outputs it by firing a generating transition, and another automaton receives the value by synchronously firing a receiving transition. Transitions in the automata can have guards with linear order constraints on the exchanged values, influencing which methods can or cannot be invoked later; the generated values thus influence the sequences of invocations that are enabled. We specify desirable properties of the sequence of method calls and the arguments passed to them using an extension of Linear Temporal Logic (LTL). We consider the interoperability problem, which asks whether it is possible to generate integer values in such a way that all enabled sequences satisfy the given specification. We show that the interoperability problem is undecidable in general, even when there are only two participating automata. We show decidability in the case where guards on generating transitions can only have equality constraints on the exchanged value (but receiving transitions can continue to have linear order constraints). We model this problem as a game between two players, one trying to generate integer values such that violating sequences are disabled, while the other tries to dig out violating sequences that are enabled. Interoperability is equivalent to the first player having a winning strategy. We solve this game via a finite abstraction, which results in a symbolic game, and then show that winning strategies for the symbolic game can be translated to winning strategies for the original game over integers.

Cite as

Ashwin Bhaskar and M. Praveen. Regulating Synchronous Data Exchange to Meet Control Flow and Data Specifications. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bhaskar_et_al:LIPIcs.FSTTCS.2025.14,
  author =	{Bhaskar, Ashwin and Praveen, M.},
  title =	{{Regulating Synchronous Data Exchange to Meet Control Flow and Data Specifications}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.14},
  URN =		{urn:nbn:de:0030-drops-250962},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.14},
  annote =	{Keywords: Distributed Systems, Interface Automata, Registers, Parity Games}
}
Document
Extending EFX Allocations to Further Multi-Graph Classes

Authors: Umang Bhaskar and Yeshwant Pandit


Abstract
The existence of EFX allocations is one of the most significant open questions in fair division. Recent work by Christodoulou, Fiat, Koutsoupias, and Sgouritsa ("Fair allocation in graphs," EC 2023) establishes the existence of EFX allocations for graphical valuations, where agents are vertices in a graph, items are edges, and each item has zero value for all agents other than those at its endpoints. Thus, in this setting, each good has non-zero value for at most two agents, and there is at most one good valued by any pair of agents. This marks one of the few cases in which an exact and complete EFX allocation is known to exist for more than three agents. In this work, we partially extend these results to multi-graphs, where each pair of vertices can have more than one edge between them. The existence of EFX allocations in multi-graphs is a natural open question given their existence in simple graphs. We show that EFX allocations exist, and can be computed in polynomial time, for agents with cancelable valuations in the following cases: (i) bipartite multi-graphs, (ii) multi-trees with monotone valuations, and (iii) multi-graphs with girth (2t-1), where t is the chromatic number of the multi-graph. The existence of EFX in cycle multi-graphs follows from (i), (iii), and the known existence of EFX for three agents.
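To make the fairness notion concrete, here is a minimal sketch (function name and data layout are our own, not from the paper) that checks whether a given allocation is EFX under additive valuations: no agent envies another agent's bundle after the removal of any single good from it.

```python
def is_efx(valuations, allocation):
    """Check EFX for additive valuations (illustrative sketch).

    valuations[i][g]: agent i's value for good g (non-negative).
    allocation[i]: set of goods held by agent i.
    EFX: for all agents i, j and every good g in j's bundle,
    agent i values its own bundle at least as much as j's bundle
    with g removed.
    """
    n = len(allocation)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j:
                continue
            for g in allocation[j]:
                # Value of j's bundle, in i's eyes, after dropping g.
                rest = sum(valuations[i][h] for h in allocation[j] if h != g)
                if own < rest:
                    return False
    return True
```

For instance, with valuations [[5, 1, 3], [2, 4, 6]] over three goods, giving good 0 to agent 0 and goods 1, 2 to agent 1 is EFX, while swapping goods 0 and 2 between the agents is not.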

Cite as

Umang Bhaskar and Yeshwant Pandit. Extending EFX Allocations to Further Multi-Graph Classes. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 15:1-15:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bhaskar_et_al:LIPIcs.FSTTCS.2025.15,
  author =	{Bhaskar, Umang and Pandit, Yeshwant},
  title =	{{Extending EFX Allocations to Further Multi-Graph Classes}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{15:1--15:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.15},
  URN =		{urn:nbn:de:0030-drops-250958},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.15},
  annote =	{Keywords: Fair Division, EFX, Multi-graphs}
}
Document
The Agafonov and Schnorr-Stimm Theorems for Probabilistic Automata

Authors: Laurent Bienvenu, Hugo Gimbert, and Subin Pulari


Abstract
For a fixed alphabet A, an infinite sequence X is said to be normal if every word w over A appears in X with the same frequency as any other word of the same length. A classical result of Agafonov (1966) relates normality to finite automata as follows: a sequence X is normal if and only if any subsequence of X selected by a finite automaton is itself normal. Another theorem of Schnorr and Stimm (1972) gives an alternative characterization: a sequence X is normal if and only if no gambler can win large amounts of money by betting on the sequence X using a strategy that can be described by a finite automaton. Both of these theorems are established in the setting of deterministic finite automata. This raises the question as to whether they can be extended to the setting of probabilistic finite automata. In the case of the Agafonov theorem, a partial positive answer was given by Léchine et al. (MFCS 2024) in a restricted case of probabilistic automata with rational transition probabilities. In this paper, we settle the full conjecture by proving that both the Agafonov and the Schnorr-Stimm theorems hold true for arbitrary probabilistic automata. Specifically, we show that a sequence X is normal if and only if any probabilistic automaton selects a normal subsequence of X with probability 1 and also show that a sequence X is normal if and only if any probabilistic finite-state gambler fails to win on X with probability 1.
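The deterministic selection rule underlying Agafonov's theorem can be illustrated with a small sketch (names are our own): a finite automaton reads the sequence, and whenever it is in a designated selecting state, the upcoming symbol is copied to the output subsequence before being consumed as usual.

```python
def select_subsequence(delta, start, selecting, sequence):
    """Oblivious finite-state selection (Agafonov-style sketch).

    delta: dict mapping (state, symbol) -> next state.
    selecting: set of states; when the automaton is in one of
    these states, the next input symbol is appended to the output.
    """
    state = start
    out = []
    for sym in sequence:
        if state in selecting:
            out.append(sym)
        state = delta[(state, sym)]
    return out
```

As a toy example, a two-state automaton that toggles on every symbol, with state 0 selecting, extracts the symbols at even positions; Agafonov's theorem says that any such finite-state selection applied to a normal sequence again yields a normal sequence.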

Cite as

Laurent Bienvenu, Hugo Gimbert, and Subin Pulari. The Agafonov and Schnorr-Stimm Theorems for Probabilistic Automata. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bienvenu_et_al:LIPIcs.FSTTCS.2025.16,
  author =	{Bienvenu, Laurent and Gimbert, Hugo and Pulari, Subin},
  title =	{{The Agafonov and Schnorr-Stimm Theorems for Probabilistic Automata}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{16:1--16:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.16},
  URN =		{urn:nbn:de:0030-drops-250978},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.16},
  annote =	{Keywords: Normality, Agafonov theorem, probabilistic automata}
}
Document
Cutoff for the Swendsen–Wang Dynamics on the Complete Graph

Authors: Antonio Blanca and Zhezheng Song


Abstract
We study the speed of convergence of the Swendsen-Wang (SW) dynamics for the q-state ferromagnetic Potts model on the n-vertex complete graph, known as the mean-field model. The SW dynamics was introduced as an attractive alternative to the local Glauber dynamics, often offering faster convergence rates to stationarity in a variety of settings. A series of works have characterized the asymptotic behavior of the speed of convergence of the mean-field SW dynamics for all q ≥ 2 and all values of the inverse temperature parameter β > 0. In particular, it is known that when β > q the mixing time of the SW dynamics is Θ(log n). We strengthen this result by showing that for all β > q, there exists a constant c(β,q) > 0 such that the mixing time of the SW dynamics is c(β,q) log n + Θ(1). This implies that the mean-field SW dynamics exhibits the cutoff phenomenon in this temperature regime, demonstrating that this Markov chain undergoes a sharp transition from "far from stationarity" to "well-mixed" within a narrow Θ(1) time window. The presence of cutoff is algorithmically significant, as simulating the chain for fewer steps than its mixing time could lead to highly biased samples.
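For intuition, here is a minimal sketch of one Swendsen-Wang update for the mean-field q-state Potts model (the per-edge probability convention 1 - e^(-β/n) is an assumption of this sketch): monochromatic edges of the complete graph are kept independently, and each resulting component is then recolored uniformly at random.

```python
import math
import random

def swendsen_wang_step(colors, q, beta, rng=random):
    """One Swendsen-Wang update on the complete graph (sketch).

    colors: list of spins, each in range(q).
    beta: inverse temperature; edge weights are rescaled by n,
    as usual in the mean-field model (assumed convention).
    """
    n = len(colors)
    p = 1.0 - math.exp(-beta / n)
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Percolation step: keep each monochromatic edge with prob p.
    for u in range(n):
        for v in range(u + 1, n):
            if colors[u] == colors[v] and rng.random() < p:
                parent[find(u)] = find(v)

    # Recoloring step: each component gets a fresh uniform color.
    new_color = {}
    return [new_color.setdefault(find(u), rng.randrange(q)) for u in range(n)]
```

The mixing time studied in the paper is the number of such steps needed, from a worst-case start, to bring the chain close to the Potts distribution; cutoff means this happens abruptly around c(β,q) log n.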

Cite as

Antonio Blanca and Zhezheng Song. Cutoff for the Swendsen–Wang Dynamics on the Complete Graph. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{blanca_et_al:LIPIcs.FSTTCS.2025.17,
  author =	{Blanca, Antonio and Song, Zhezheng},
  title =	{{Cutoff for the Swendsen–Wang Dynamics on the Complete Graph}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.17},
  URN =		{urn:nbn:de:0030-drops-250987},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.17},
  annote =	{Keywords: Markov chains, mixing times, cutoff phenomenon, Potts model, mean-field}
}
Document
Iterating Non-Aggregative Structure Compositions

Authors: Marius Bozga, Radu Iosif, and Florian Zuleger


Abstract
An aggregative composition is a binary operation obeying the principle that the whole is determined by the sum of its parts. The development of graph algebras, on which the theory of formal graph languages is built, relies on aggregative compositions that behave like disjoint union, except for a set of well-marked interface vertices from both sides, which are joined. The same style of composition has been considered in the context of relational structures, which generalize graphs and use constant symbols to label the interface. In this paper, we study a non-aggregative composition operation, called fusion, that joins non-deterministically chosen elements from disjoint structures. The sets of structures obtained by iteratively applying fusion do not always have bounded tree-width, even when starting from a tree-width bounded set. First, we prove that the problem of the existence of a bound on the tree-width of the closure of a given set under fusion is decidable, when the input set is described inductively by a finite hyperedge-replacement (HR) grammar, written using the operations of aggregative composition, forgetting and renaming of constants. Such sets are usually called context-free. Second, assuming that the closure under fusion of a context-free set has bounded tree-width, we show that it is the language of an effectively constructible HR grammar. A possible application of the latter result is checking whether all structures from a non-aggregatively closed set of bounded tree-width satisfy a given monadic second-order logic formula.

Cite as

Marius Bozga, Radu Iosif, and Florian Zuleger. Iterating Non-Aggregative Structure Compositions. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bozga_et_al:LIPIcs.FSTTCS.2025.18,
  author =	{Bozga, Marius and Iosif, Radu and Zuleger, Florian},
  title =	{{Iterating Non-Aggregative Structure Compositions}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.18},
  URN =		{urn:nbn:de:0030-drops-250997},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.18},
  annote =	{Keywords: Hyperedge replacement, Tree-width}
}
Document
Clustering in Varying Metrics

Authors: Deeparnab Chakrabarty, Jonathan Conroy, and Ankita Sarkar


Abstract
We introduce the aggregated clustering problem, where one is given T instances of a center-based clustering task over the same n points, but under different metrics. The goal is to open k centers to minimize an aggregate of the clustering costs - e.g., the average or maximum - where the cost is measured via k-center/median/means objectives. More generally, we minimize a norm Ψ over the T cost values. We show that for T ≥ 3, the problem is inapproximable to any finite factor in polynomial time. For T = 2, we give constant-factor approximations. We also show W[2]-hardness when parameterized by k, but obtain f(k,T)poly(n)-time 3-approximations when parameterized by both k and T. When the metrics have structure, we obtain efficient parameterized approximation schemes (EPAS). If all T metrics have bounded ε-scatter dimension, we achieve a (1+ε)-approximation in f(k,T,ε)poly(n) time. If the metrics are induced by edge weights on a common graph G of bounded treewidth tw, and Ψ is the sum function, we get an EPAS in f(T,ε,tw)poly(n,k) time. Conversely, unless (randomized) ETH is false, any finite factor approximation is impossible if parameterized by only T, even when the treewidth is tw = Ω(polylog n).
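The aggregated objective can be illustrated with a short sketch (names hypothetical): given T distance matrices over the same n points and a set of open centers, compute the k-median cost under each metric separately and combine the T costs with a norm Ψ such as max or sum.

```python
def aggregated_cost(dists, centers, norm=max):
    """Aggregate k-median costs across T metrics (sketch).

    dists: list of T n-by-n distance matrices over the same points.
    centers: indices of the opened centers (shared by all metrics).
    norm: how the T per-metric costs are combined, e.g. max or sum.
    """
    costs = []
    for d in dists:
        # k-median objective: every point pays its distance to the
        # nearest open center, measured in this metric.
        costs.append(sum(min(row[c] for c in centers) for row in d))
    return norm(costs)
```

The difficulty in the problem is that the same set of centers must be chosen for all T metrics at once; evaluating the objective for a fixed center set, as above, is easy.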

Cite as

Deeparnab Chakrabarty, Jonathan Conroy, and Ankita Sarkar. Clustering in Varying Metrics. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 19:1-19:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chakrabarty_et_al:LIPIcs.FSTTCS.2025.19,
  author =	{Chakrabarty, Deeparnab and Conroy, Jonathan and Sarkar, Ankita},
  title =	{{Clustering in Varying Metrics}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{19:1--19:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.19},
  URN =		{urn:nbn:de:0030-drops-251007},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.19},
  annote =	{Keywords: Clustering, approximation algorithms, LP rounding, parameterized and exact algorithms, dynamic programming, fixed parameter tractability, hardness of approximation}
}
Document
Flavors of Quantifiers in Hyperlogics

Authors: Marek Chalupa, Thomas A. Henzinger, and Ana Oliveira da Costa


Abstract
Hypertrace logic is a sorted first-order logic with separate sorts for time and execution traces. Its formulas specify hyperproperties, which are properties relating multiple traces. In this work, we extend hypertrace logic by introducing trace quantifiers that range over the set of all possible traces. In this extended logic, formulas can quantify over two kinds of trace variables: constrained trace variables, which range over a fixed set of traces defined by the model, and unconstrained trace variables, which can be assigned to any trace. In comparison, hyperlogics such as HyperLTL have only constrained trace quantifiers. We use hypertrace logic to study how different quantifier patterns affect the decidability of the satisfiability problem. We prove that hypertrace logic without constrained trace quantifiers is equivalent to monadic second-order logic of one successor (S1S), and therefore decidable, and that the trace-prefixed fragment (all trace quantifiers precede all time quantifiers) is equivalent to HyperQPTL. Moreover, we show that all hypertrace formulas where the only alternation between constrained trace quantifiers is from an existential to a universal quantifier are equisatisfiable to formulas without constraints on their trace variables and, therefore, decidable as well. Our framework also allows us to study time-prefixed hyperlogics, for which we provide new decidability and undecidability results.

Cite as

Marek Chalupa, Thomas A. Henzinger, and Ana Oliveira da Costa. Flavors of Quantifiers in Hyperlogics. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chalupa_et_al:LIPIcs.FSTTCS.2025.20,
  author =	{Chalupa, Marek and Henzinger, Thomas A. and Oliveira da Costa, Ana},
  title =	{{Flavors of Quantifiers in Hyperlogics}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.20},
  URN =		{urn:nbn:de:0030-drops-251016},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.20},
  annote =	{Keywords: Hyperproperties, Satisfiability, First-order Logic, S1S}
}
Document
Overlay Network Construction: Improved Overall and Node-Wise Message Complexity

Authors: Yi-Jun Chang, Yanyu Chen, and Gopinath Mishra


Abstract
We consider the problem of constructing distributed overlay networks, where nodes in a reconfigurable system can create or sever connections with nodes whose identifiers they know. Initially, each node knows only its own and its neighbors' identifiers, forming a local channel, while the evolving structure is termed the global channel. The goal is to reconfigure any connected graph into a desired topology, such as a bounded-degree expander graph or a well-formed tree (WFT) with a constant maximum degree and logarithmic diameter, minimizing the total number of rounds and the message complexity. This problem mirrors real-world peer-to-peer network construction, where creating robust and efficient systems is desired. We study the overlay reconstruction problem in a network of n nodes in two models: GOSSIP-reply and HYBRID. In the GOSSIP-reply model, each node can send a message and receive a corresponding reply message in one round. In the HYBRID model, a node can send O(1) messages to each neighbor in the local channel and a total of O(log n) messages in the global channel. In both models, we propose protocols for WFT construction with O(n log n) message complexity using messages of O(log n) bits. In the GOSSIP-reply model, our protocol takes O(log n) rounds, while in the HYBRID model it takes O(log² n) rounds. Both protocols use O(n log² n) bits of communication. We obtain improved bounds over prior work. GOSSIP-reply: A recent result by Dufoulon et al. (ITCS 2024) achieved O(log⁵ n) round complexity and O(n log⁵ n) message complexity using messages of at least Ω(log² n) bits in GOSSIP-reply. With messages of size O(log n), our protocol achieves an optimal round complexity of O(log n) and an improved message complexity of O(n log n). HYBRID: Götte et al. (Distributed Computing 2023) showed an optimal O(log n)-round algorithm with O(log² n) global messages per round, which incurs a message complexity of Ω(m), where m is the number of edges in the initial topology. At the cost of increasing the round complexity to O(log² n) while using only O(log n) messages globally, our protocol achieves a message complexity that is independent of m. Our approach ensures that the total number of messages for a node v with degree deg(v) in the initial topology is bounded by O(deg(v) + log n), whereas the algorithm of Götte et al. requires O(deg(v) + (log⁴ n)/(log log n)) messages per node.

Cite as

Yi-Jun Chang, Yanyu Chen, and Gopinath Mishra. Overlay Network Construction: Improved Overall and Node-Wise Message Complexity. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 21:1-21:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chang_et_al:LIPIcs.FSTTCS.2025.21,
  author =	{Chang, Yi-Jun and Chen, Yanyu and Mishra, Gopinath},
  title =	{{Overlay Network Construction: Improved Overall and Node-Wise Message Complexity}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{21:1--21:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.21},
  URN =		{urn:nbn:de:0030-drops-251025},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.21},
  annote =	{Keywords: Distributed algorithms, Overlay networks, Expander graphs}
}
Document
IPS Lower Bounds for Formulas and Sum of ROABPs

Authors: Prerona Chatterjee, Utsab Ghosal, Partha Mukhopadhyay, and Amit Sinhababu


Abstract
We give new lower bounds for fragments of the Ideal Proof System (IPS) introduced by Grochow and Pitassi [Joshua A. Grochow and Toniann Pitassi, 2018]. The Ideal Proof System is a central topic in algebraic proof complexity, developed in the context of Nullstellensatz refutation [Paul Beame et al., 1994], and it simulates Extended Frege efficiently. Our main results are as follows. - mult-IPS_{Lin'}: We prove a nearly quadratic-size formula lower bound for the multilinear refutation (over the Boolean hypercube) of a variant of the subset-sum axiom polynomial. Extending this, we obtain a nearly matching qualitative statement for a constant-degree target polynomial. - IPS_{Lin'}: Over fields of characteristic zero, we prove an exponential-size sum-of-ROABPs lower bound for the refutation of a variant of the subset-sum axiom polynomial. The result also extends to fields of positive characteristic when the target polynomial is suitably modified; the modification is inspired by recent results [Tuomas Hakoniemi et al., 2024; Amik Raj Behera et al., 2025]. The mult-IPS_{Lin'} lower bound is obtained by combining the quadratic-size formula lower bound technique of Kalorkoti [Kalorkoti, 1985] with some additional ideas. The proof technique for the IPS_{Lin'} lower bound is inspired by the recent lower bound of Chatterjee, Kush, Saraf and Shpilka [Prerona Chatterjee et al., 2024].

Cite as

Prerona Chatterjee, Utsab Ghosal, Partha Mukhopadhyay, and Amit Sinhababu. IPS Lower Bounds for Formulas and Sum of ROABPs. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 22:1-22:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.FSTTCS.2025.22,
  author =	{Chatterjee, Prerona and Ghosal, Utsab and Mukhopadhyay, Partha and Sinhababu, Amit},
  title =	{{IPS Lower Bounds for Formulas and Sum of ROABPs}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{22:1--22:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.22},
  URN =		{urn:nbn:de:0030-drops-251035},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.22},
  annote =	{Keywords: Ideal Proof System, Lower Bound, Algebraic Complexity}
}
Document
Parallel Complexity of Depth-First-Search and Maximal Path in Restricted Graph Classes

Authors: Archit Chauhan, Samir Datta, and M. Praveen


Abstract
Constructing a Depth First Search (DFS) tree is a fundamental graph problem whose parallel complexity is still not settled. Reif showed parallel intractability of lex-first DFS. In contrast, randomized parallel algorithms (and more recently, deterministic quasipolynomial parallel algorithms) are known for constructing a DFS tree in general (di)graphs. However, a deterministic parallel algorithm for DFS in general graphs remains an elusive goal. Working towards this, a series of works gave deterministic NC algorithms for DFS in planar graphs and digraphs. We further extend these results to more general graph classes by providing NC algorithms for (di)graphs of bounded genus, and for undirected H-minor-free graphs where H is a fixed graph with at most one crossing. For the case of (di)graphs of bounded treewidth, we further improve the complexity to a Logspace bound. Constructing a maximal path is a simpler problem (that reduces to DFS) for which no deterministic parallel bounds are known for general graphs. For planar graphs, a bound of O(log n) parallel time on a CRCW PRAM (and thus in NC²) is known. We improve this bound to Logspace.

Cite as

Archit Chauhan, Samir Datta, and M. Praveen. Parallel Complexity of Depth-First-Search and Maximal Path in Restricted Graph Classes. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chauhan_et_al:LIPIcs.FSTTCS.2025.23,
  author =	{Chauhan, Archit and Datta, Samir and Praveen, M.},
  title =	{{Parallel Complexity of Depth-First-Search and Maximal Path in Restricted Graph Classes}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.23},
  URN =		{urn:nbn:de:0030-drops-251041},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.23},
  annote =	{Keywords: Parallel Complexity, Graph Algorithms, Depth First Search, Maximal Path, Planar Graphs, Minor-Free, Treewidth, Logspace}
}
Document
Languages of Words of Low Automatic Complexity Are Hard to Compute

Authors: Joey Chen, Bjørn Kjos-Hanssen, Ivan Koswara, Linus Richter, and Frank Stephan


Abstract
The automatic complexity of a finite word (string) is an analogue for finite automata of Sipser’s distinguishing complexity (1983) and was introduced by Shallit and Wang (2001). For a finite alphabet Σ of at least two elements, we consider the non-deterministic automatic complexity given by exactly - yet not necessarily uniquely - accepting automata: a word x ∈ Σ^* has exact non-deterministic automatic complexity k ∈ ℕ if there exists a non-deterministic automaton of k states which accepts x while rejecting every other word of the same length as x, and no automaton of fewer states has this property. Importantly, and in contrast to the classical notion, the witnessing automaton may have multiple paths of computation accepting x. We denote this measure of complexity by A_{Ne}, and study a class of languages of low A_{Ne}-complexity defined as L_q = {x ∈ Σ^* : A_{Ne}(x) < q|x|}, which is parameterised by rationals q ∈ (0,1/2) (generalising a class of sets first studied by Kjos-Hanssen). We show that for every q ∈ (0,1/2), this class is neither context-free nor recognisable by certain Boolean circuits. In the process, we answer an open question of Kjos-Hanssen quantifying the complexity of L_{1/3} in terms of Boolean circuits, and also prove the Shannon effect for A_{Ne}.

Cite as

Joey Chen, Bjørn Kjos-Hanssen, Ivan Koswara, Linus Richter, and Frank Stephan. Languages of Words of Low Automatic Complexity Are Hard to Compute. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 24:1-24:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.FSTTCS.2025.24,
  author =	{Chen, Joey and Kjos-Hanssen, Bj{\o}rn and Koswara, Ivan and Richter, Linus and Stephan, Frank},
  title =	{{Languages of Words of Low Automatic Complexity Are Hard to Compute}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{24:1--24:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.24},
  URN =		{urn:nbn:de:0030-drops-251055},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.24},
  annote =	{Keywords: Automatic complexity, automata theory, formal languages, Boolean circuits, Shannon effect}
}
Document
On the Interplay of Cube Learning and Dependency Schemes in QCDCL Proof Systems

Authors: Abhimanyu Choudhury and Meena Mahajan


Abstract
Quantified Conflict Driven Clause Learning (QCDCL) is one of the main approaches to solving Quantified Boolean Formulas (QBF). Cube-learning is employed in this approach to ensure that true formulas can be verified. Dependency Schemes help to detect spurious dependencies that are implied by the variable ordering in the quantifier prefix of QBFs but are not essential for constructing (counter)models. This detection can provably shorten refutations in specific proof systems, and is expected to speed up runs of QBF solvers. The simplest underlying proof system [BeyersdorffBöhm-LMCS2023] formalises the reasoning in the QCDCL approach on false formulas, when neither cube-learning nor dependency schemes are used. The work of [BöhmPeitlBeyersdorff-AI2024] further incorporates cube-learning. The work of [ChoudhuryMahajan-JAR2024] incorporates a limited use of dependency schemes, but without cube-learning. In this work, proof systems underlying the reasoning of QCDCL solvers which use cube learning, and which use dependency schemes at all stages, are formalised. Sufficient conditions for soundness and completeness are presented, and it is shown that using the standard and reflexive resolution path dependency schemes (𝙳^{std} and 𝙳^{rrs}) to relax the decision order provably shortens refutations. When the decisions are restricted to follow quantification order, but dependency schemes are used in propagation and learning, in conjunction with cube-learning, the resulting proof systems using the dependency schemes 𝙳^{std} and 𝙳^{rrs} are investigated in detail and their relative strengths are analysed.

Cite as

Abhimanyu Choudhury and Meena Mahajan. On the Interplay of Cube Learning and Dependency Schemes in QCDCL Proof Systems. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 25:1-25:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{choudhury_et_al:LIPIcs.FSTTCS.2025.25,
  author =	{Choudhury, Abhimanyu and Mahajan, Meena},
  title =	{{On the Interplay of Cube Learning and Dependency Schemes in QCDCL Proof Systems}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{25:1--25:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.25},
  URN =		{urn:nbn:de:0030-drops-251062},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.25},
  annote =	{Keywords: QBF, CDCL, Resolution, Dependency schemes}
}
Document
Characterizing NC¹ with Typed Monoids

Authors: Anuj Dawar and Aidan T. Evans


Abstract
Krebs et al. (2007) gave a characterization of the complexity class TC⁰ as the class of languages recognized by a certain class of typed monoids. The notion of typed monoid was introduced to extend methods of algebraic automata theory to infinite monoids and hence characterize classes beyond the regular languages. We advance this line of work beyond TC⁰ by giving a characterization of NC¹. This is obtained by first showing that NC¹ can be defined as the languages expressible in an extension of first-order logic using only unary quantifiers over regular languages. The expressibility result is a consequence of a general result showing that finite monoid multiplication quantifiers of higher dimension can be replaced with unary quantifiers in the context of interpretations over strings, which also answers a question of Lautemann et al. (2001). We establish this collapse result for a much more general class of interpretations using results on interpretations due to Bojańczyk et al. (2019), which may be of independent interest.

Cite as

Anuj Dawar and Aidan T. Evans. Characterizing NC¹ with Typed Monoids. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{dawar_et_al:LIPIcs.FSTTCS.2025.26,
  author =	{Dawar, Anuj and Evans, Aidan T.},
  title =	{{Characterizing NC¹ with Typed Monoids}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.26},
  URN =		{urn:nbn:de:0030-drops-251070},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.26},
  annote =	{Keywords: algebraic automata theory, circuit complexity, descriptive complexity, typed monoids, semigroups, generalized quantifiers}
}
Document
Fault-Tolerant Approximate Distance Oracles with a Source Set

Authors: Dipan Dey and Telikepalli Kavitha


Abstract
Our input is an undirected weighted graph G = (V,E) on n vertices along with a source set S ⊆ V. The problem is to preprocess G and build a compact data structure such that upon query Qu(s,v,f) where (s,v) ∈ S×V and f is any faulty edge, we can quickly find a good estimate (i.e., within a small multiplicative stretch) of the s-v distance in G-f. We use a fault-tolerant ST-distance oracle from the work of Bilò et al. (STACS 2018) to construct an S×V approximate distance oracle or sourcewise approximate distance oracle of size Õ(|S|n + n^{3/2}) with multiplicative stretch at most 5. We construct another fault-tolerant sourcewise approximate distance oracle of size Õ(|S|n + n^{4/3}) with multiplicative stretch at most 13. Both the oracles have O(1) query answering time.

Cite as

Dipan Dey and Telikepalli Kavitha. Fault-Tolerant Approximate Distance Oracles with a Source Set. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 27:1-27:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{dey_et_al:LIPIcs.FSTTCS.2025.27,
  author =	{Dey, Dipan and Kavitha, Telikepalli},
  title =	{{Fault-Tolerant Approximate Distance Oracles with a Source Set}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{27:1--27:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.27},
  URN =		{urn:nbn:de:0030-drops-251081},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.27},
  annote =	{Keywords: Weighted graphs, approximate distances, fault-tolerant data structures}
}
Document
Beyond Monads and Biproducts: A Uniform Interpretation of Parallelism in Intuitionistic Logic

Authors: Alejandro Díaz-Caro and Octavio Malherbe


Abstract
Traditional approaches to modelling parallelism and algebraic structure in lambda calculi often rely on monads - as in Moggi’s framework - or on rich categorical structures such as biproducts - as used in certain models of linear logic. In this work, we propose a minimal alternative that captures both parallelism and weighted parallelism (linear combinations) within the setting of intuitionistic propositional logic, without resorting to monads or assuming the existence of biproducts. We introduce two lambda calculi: a parallel lambda calculus and an algebraic lambda calculus, both extending full propositional intuitionistic logic. Their semantics are given in two categories: Mag_{Set}, whose objects are magmas and arrows are functions in Set; and AMag^{𝒮}_{Set}, whose objects are action magmas. The key technical challenge addressed is the interpretation of disjunction in the presence of parallel and algebraic operators. Since the usual coproduct structure is unavailable in our minimal setting, we propose a novel set-theoretic interpretation based on the union of the disjoint union and the Cartesian product. This allows for the construction of sound and adequate models for both calculi. Our results offer a unified and structurally lightweight framework for modelling parallelism and algebraic effects in intuitionistic logic, opening the way to alternatives beyond the traditional monadic or linear logic approaches.

Cite as

Alejandro Díaz-Caro and Octavio Malherbe. Beyond Monads and Biproducts: A Uniform Interpretation of Parallelism in Intuitionistic Logic. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{diazcaro_et_al:LIPIcs.FSTTCS.2025.28,
  author =	{D{\'\i}az-Caro, Alejandro and Malherbe, Octavio},
  title =	{{Beyond Monads and Biproducts: A Uniform Interpretation of Parallelism in Intuitionistic Logic}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.28},
  URN =		{urn:nbn:de:0030-drops-251098},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.28},
  annote =	{Keywords: Algebraic lambda calculus, Categorical semantics, Disjunction, Proof theory}
}
Document
Beyond Exact Fairness: Envy-Free Incomplete Connected Fair Division

Authors: Ajaykrishnan E S and Daniel Lokshtanov


Abstract
We study the problem of Envy-Free Incomplete Connected Fair Division, where exactly p vertices of an undirected graph must be allocated to agents such that each agent receives a connected share and does not envy another agent’s share. Focusing on agents with additive valuations, we show that the problem remains computationally hard when parameterized by p and the number of agents. This result holds even for star graphs and with the input numbers given in unary representation, thereby resolving an open problem posed by Gahlawat and Zehavi (FSTTCS 2023). In stark contrast, we show that if one is willing to tolerate even the slightest amount of envy, then the problem becomes efficient with respect to the natural parameters. Specifically, we design an Efficient Parameterized Approximation Scheme parameterized by p and the number of agent types. Our algorithm works on general graphs and remains efficient even when the input numbers are provided in binary representation.

Cite as

Ajaykrishnan E S and Daniel Lokshtanov. Beyond Exact Fairness: Envy-Free Incomplete Connected Fair Division. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{es_et_al:LIPIcs.FSTTCS.2025.29,
  author =	{E S, Ajaykrishnan and Lokshtanov, Daniel},
  title =	{{Beyond Exact Fairness: Envy-Free Incomplete Connected Fair Division}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.29},
  URN =		{urn:nbn:de:0030-drops-251101},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.29},
  annote =	{Keywords: Envy-Free Incomplete Connected Fair Division, Efficient Parameterized Approximation Scheme, W\lbrack1\rbrack-hardness}
}
Document
Approximating Optimal Broadcast of Files in a Hose-Model Network

Authors: Thomas Erlebach, Naveen Garg, Sukriti Gupta, and Amitabh Trehan


Abstract
The paper considers the problem of file sharing among peers who are connected to a common core network through links of differing upload and download capacities, as is the case in networks provisioned according to the hose model. The file is assumed to be divided into equal-sized chunks, and a peer can start sending a "chunk" of the file to another peer only after it has received the entire chunk. The objective is to share a chunk, initially residing on one of the peers, with all other peers in the least time possible. Peers can simultaneously send/receive parts of a chunk to/from multiple peers, subject to the upload and download capacity constraints. We only consider the problem of broadcasting one chunk to all peers. We consider two different models: in the migratory model, a peer can receive the chunk from multiple peers, while in the non-migratory model, any peer can receive the chunk only from one peer. For the migratory model, introduced in this paper, we show a novel integer program and use the optimum solution to the LP-relaxation to give a schedule with makespan e^{1/e} OPT+P, where P is the time required by the slowest peer to download the chunk. Minimising makespan in the non-migratory model is known to be NP-hard. We give a solution with makespan 18OPT+P, and this is the first approximation algorithm for heterogeneous and asymmetric upload/download capacities. We also consider two special cases. For uniform download capacities, we obtain a solution with makespan 2OPT, extending a result due to Liu [Pangfeng Liu, 2002]. For uniform upload capacities, we give the first approximation algorithm, producing makespan at most 2OPT+2P.

Cite as

Thomas Erlebach, Naveen Garg, Sukriti Gupta, and Amitabh Trehan. Approximating Optimal Broadcast of Files in a Hose-Model Network. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 30:1-30:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{erlebach_et_al:LIPIcs.FSTTCS.2025.30,
  author =	{Erlebach, Thomas and Garg, Naveen and Gupta, Sukriti and Trehan, Amitabh},
  title =	{{Approximating Optimal Broadcast of Files in a Hose-Model Network}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{30:1--30:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.30},
  URN =		{urn:nbn:de:0030-drops-251118},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.30},
  annote =	{Keywords: File sharing, scheduling, peer-to-peer networks}
}
Document
Token Sliding Independent Set Reconfiguration on Block Graphs

Authors: Mathew C. Francis and Veena Prabhakaran


Abstract
Let S be an independent set of a simple undirected graph G. Suppose that each vertex of S has a token placed on it. The tokens are allowed to be moved, one at a time, by sliding along the edges of G while maintaining the property that after each move, the vertices having tokens always form an independent set of G. We would like to determine whether the tokens can be eventually brought to stay on the vertices of another independent set S' of G in this manner. In other words, we would like to decide if we can transform S into S' through a sequence of steps, each of which involves substituting a vertex in the current independent set with one of its neighbours to obtain another independent set. This problem of determining if one independent set of a graph "is reachable" from another independent set of it is known to be PSPACE-hard even for split graphs, planar graphs, and graphs of bounded treewidth. Polynomial time algorithms have been obtained for certain graph classes like trees, interval graphs, claw-free graphs, and bipartite permutation graphs. We present a polynomial time algorithm for the problem on block graphs, which are the graphs in which every maximal 2-connected subgraph is a clique. Our algorithm is the first generalization of the known polynomial time algorithm for trees to a larger class of graphs.

Cite as

Mathew C. Francis and Veena Prabhakaran. Token Sliding Independent Set Reconfiguration on Block Graphs. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 31:1-31:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{francis_et_al:LIPIcs.FSTTCS.2025.31,
  author =	{Francis, Mathew C. and Prabhakaran, Veena},
  title =	{{Token Sliding Independent Set Reconfiguration on Block Graphs}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{31:1--31:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.31},
  URN =		{urn:nbn:de:0030-drops-251120},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.31},
  annote =	{Keywords: Token sliding independent set reconfiguration, block graphs, polynomial time algorithm}
}
Document
Fair Rent Division: New Budget and Rent Constraints

Authors: Rohith Reddy Gangam, Shayan Taherijam, and Vijay V. Vazirani


Abstract
We study the classical rent division problem, where n agents must allocate n indivisible rooms and split a fixed total rent R. The goal is to compute an envy-free (EF) allocation, where no agent prefers another agent’s room and rent to their own. This problem has been extensively studied under standard assumptions, where efficient algorithms for computing EF allocations are known. We extend this framework by introducing two practically motivated constraints: (i) lower and upper bounds on room rents, and (ii) room-specific budgets for agents. We develop efficient combinatorial algorithms that either compute a feasible EF allocation or certify infeasibility. We further design algorithms to optimize over EF allocations using natural fairness objectives such as maximin utility, leximin utility, and minimum utility spread. Our approach unifies both constraint types within a single algorithmic framework, advancing the applicability of fair division methods in real-world platforms such as Spliddit.

Cite as

Rohith Reddy Gangam, Shayan Taherijam, and Vijay V. Vazirani. Fair Rent Division: New Budget and Rent Constraints. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 32:1-32:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gangam_et_al:LIPIcs.FSTTCS.2025.32,
  author =	{Gangam, Rohith Reddy and Taherijam, Shayan and Vazirani, Vijay V.},
  title =	{{Fair Rent Division: New Budget and Rent Constraints}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{32:1--32:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.32},
  URN =		{urn:nbn:de:0030-drops-251136},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.32},
  annote =	{Keywords: Rent Division, Envy-Free, Fair Division}
}
Document
A Zone-Based Algorithm for Timed Parity Games

Authors: Gilles Geeraerts, Frédéric Herbreteau, Jean-François Raskin, and Alexis Reynouard


Abstract
This paper revisits timed games by building upon the semantics introduced in "The Element of Surprise in Timed Games" [Luca de Alfaro et al., 2003]. We introduce some modifications to this semantics for two primary reasons: firstly, we recognize instances where the original semantics appears counterintuitive in the context of controller synthesis; secondly, we present methods to develop efficient zone-based algorithms. Our algorithm successfully addresses timed parity games, and we have implemented it using UPPAAL’s zone library. This prototype effectively demonstrates the feasibility of a zone-based algorithm for parity objectives and a rich semantics for timed interactions between the players.

Cite as

Gilles Geeraerts, Frédéric Herbreteau, Jean-François Raskin, and Alexis Reynouard. A Zone-Based Algorithm for Timed Parity Games. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{geeraerts_et_al:LIPIcs.FSTTCS.2025.33,
  author =	{Geeraerts, Gilles and Herbreteau, Fr\'{e}d\'{e}ric and Raskin, Jean-Fran\c{c}ois and Reynouard, Alexis},
  title =	{{A Zone-Based Algorithm for Timed Parity Games}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.33},
  URN =		{urn:nbn:de:0030-drops-251140},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.33},
  annote =	{Keywords: Timed Parity Games, Realtime Controller Synthesis}
}
Document
On the Hardness of Approximating Distances of Quantum Codes

Authors: Elena Grigorescu, Vatsal Jha, and Eric Samperton


Abstract
The problem of computing distances of error-correcting codes is fundamental in both the classical and quantum settings. While hardness for the classical version of these problems has been known for some time (in both the exact and approximate settings), it was only recently that Kapshikar and Kundu showed these problems are also hard in the quantum setting. As our first main result, we reprove this using arguably simpler arguments based on hypergraph product codes. In particular, we get a direct reduction to CSS codes, the most commonly used type of quantum code, from the minimum distance problem for classical linear codes. Our second set of results considers the distance of a graph state, which is a key parameter for quantum codes obtained via the codeword stabilized formalism. We show that it is NP-hard to compute/approximate the distance of a graph state when the adjacency matrix of the graph is the input. In fact, we show this is true even if we only consider X-type errors of a graph state. Our techniques moreover imply an interesting classical consequence: the hardness of computing or approximating the distance of classical codes with rate equal to 1/2. One of the main motivations of the present work is a question raised by Kapshikar and Kundu concerning the NP-hardness of approximation when there is an additive error proportional to a quantum code’s length. We show that no such hardness can hold for hypergraph product codes. These observations suggest the possibility of a new kind of square root barrier.

Cite as

Elena Grigorescu, Vatsal Jha, and Eric Samperton. On the Hardness of Approximating Distances of Quantum Codes. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 34:1-34:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{grigorescu_et_al:LIPIcs.FSTTCS.2025.34,
  author =	{Grigorescu, Elena and Jha, Vatsal and Samperton, Eric},
  title =	{{On the Hardness of Approximating Distances of Quantum Codes}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{34:1--34:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.34},
  URN =		{urn:nbn:de:0030-drops-251152},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.34},
  annote =	{Keywords: quantum codes, minimum distance problem, NP-hardness, graph state distance}
}
Document
Scalable Learning of One-Counter Automata via State-Merging Algorithms

Authors: Shibashis Guha, Anirban Majumdar, Prince Mathew, and A.V. Sreejith


Abstract
We propose One-counter Positive Negative Inference (OPNI), a passive learning algorithm for deterministic real-time one-counter automata (DROCA). Inspired by the RPNI algorithm for regular languages, OPNI constructs a DROCA consistent with any given valid sample set. We further present a semi-algorithm for active learning of DROCA using OPNI, and provide an implementation of the approach. Our experimental results demonstrate that this approach scales more effectively than existing state-of-the-art algorithms. We also evaluate the performance of the proposed approach for learning visibly one-counter automata.

Cite as

Shibashis Guha, Anirban Majumdar, Prince Mathew, and A.V. Sreejith. Scalable Learning of One-Counter Automata via State-Merging Algorithms. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 35:1-35:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{guha_et_al:LIPIcs.FSTTCS.2025.35,
  author =	{Guha, Shibashis and Majumdar, Anirban and Mathew, Prince and Sreejith, A.V.},
  title =	{{Scalable Learning of One-Counter Automata via State-Merging Algorithms}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{35:1--35:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.35},
  URN =		{urn:nbn:de:0030-drops-251168},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.35},
  annote =	{Keywords: active learning, passive learning, one-counter automata, RPNI}
}
Document
Hardness of Finding Kings and Strong Kings

Authors: Ziad Ismaili Alaoui and Nikhil Mande


Abstract
A king in a directed graph is a vertex v such that every other vertex is reachable from v via a path of length at most 2. It is well known that every tournament (a complete graph where each edge has a direction) has at least one king. Our contributions in this work are:
- We show that the query complexity of determining the existence of a king in arbitrary n-vertex digraphs is Θ(n²). This is in stark contrast to the case where the input is a tournament, where Shen, Sheng, and Wu [SICOMP'03] showed that a king can be found in O(n^{3/2}) queries.
- In an attempt to increase the "fairness" in the definition of tournament winners, Ho and Chang [IPL'03] defined a strong king to be a king k such that, for every v that dominates k, the number of length-2 paths from k to v is strictly larger than the number of length-2 paths from v to k. We show that the query complexity of finding a strong king in a tournament is Θ(n²). This answers a question of Biswas, Jayapaul, Raman, and Satti [DAM'22] in the negative.
A key component in our proofs is the design of specific tournaments where every vertex is a king, and an analysis of certain properties of these tournaments. We feel these constructions and properties are independently interesting and may lead to more interesting results about tournament solutions.
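As a standalone illustration of the definition above (not code from the paper), a brute-force check for kings, assuming the digraph is given as a map from each vertex to its set of out-neighbours:

```python
def kings(beats):
    """Return all vertices that reach every other vertex in at most 2 steps.

    `beats[v]` is the set of out-neighbours of v; in a tournament, exactly
    one of (u, v) and (v, u) is an edge for every pair u != v.
    """
    vertices = set(beats)
    result = []
    for v in vertices:
        # v itself, its out-neighbours, and everything one step beyond them
        reach = {v} | beats[v] | {w for u in beats[v] for w in beats[u]}
        if reach == vertices:
            result.append(v)
    return sorted(result)

# 3-cycle tournament 0 -> 1 -> 2 -> 0: every vertex is a king.
print(kings({0: {1}, 1: {2}, 2: {0}}))       # [0, 1, 2]
# Transitive tournament 0 -> 1, 0 -> 2, 1 -> 2: only the source is a king.
print(kings({0: {1, 2}, 1: {2}, 2: set()}))  # [0]
```

This inspects every edge, in line with the Θ(n²) query lower bound for general digraphs stated above; the O(n^{3/2})-query tournament algorithm of Shen, Sheng, and Wu is far more clever than this sketch.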

Cite as

Ziad Ismaili Alaoui and Nikhil Mande. Hardness of Finding Kings and Strong Kings. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 36:1-36:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{ismailialaoui_et_al:LIPIcs.FSTTCS.2025.36,
  author =	{Ismaili Alaoui, Ziad and Mande, Nikhil},
  title =	{{Hardness of Finding Kings and Strong Kings}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{36:1--36:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.36},
  URN =		{urn:nbn:de:0030-drops-250856},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.36},
  annote =	{Keywords: Tournaments, kings, query complexity}
}
Document
Communication Complexity of Equality and Error-Correcting Codes

Authors: Dale Jacobs, John Jeang, Vladimir Podolskii, Morgan Prior, and Ilya Volkovich


Abstract
We study the public-coin randomized communication complexity of the equality function. The communication complexity of this function is known to be low when the error probability is constant and the players have access to many random bits. The complexity grows, however, if the allowed error probability and the amount of randomness are restricted. We show that public-coin randomized protocols for equality and error-correcting codes are essentially the same object. That is, given a protocol for equality, we can construct a code, and vice versa. We substantially extend the protocol-implies-code direction: any protocol computing a function with a large fooling set can be converted into an error-correcting code. As a corollary, we show that among functions with a fooling set of size s, equality on log s bits has the least randomized communication complexity, regardless of the restrictions on the error probability and the amount of randomness. Finally, we use the connection to error-correcting codes to analyze the randomized communication complexity of equality for varying restrictions on the error probability and the amount of randomness. In most cases, we provide tight bounds. We pinpoint the setting in which tight bounds are still unknown.
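For concreteness, the classic textbook public-coin protocol for equality via shared random parities (a standard construction, not one of the specific protocols analyzed in the paper) can be simulated as follows; with `reps` rounds it uses `reps` bits of communication and errs on unequal inputs with probability at most 2^(-reps):

```python
import random

def equality_protocol(x, y, reps=20, rng=None):
    """Simulate the shared-randomness parity protocol for Equality.

    Alice and Bob hold bit vectors x and y and share random strings r.
    In each round Alice sends the single bit <x, r> mod 2, and Bob
    compares it against <y, r> mod 2.  Equal inputs always pass every
    round; for x != y each round rejects with probability exactly 1/2.
    """
    rng = rng or random.Random(0)
    assert len(x) == len(y)
    for _ in range(reps):
        r = [rng.randrange(2) for _ in x]
        a = sum(xi & ri for xi, ri in zip(x, r)) % 2  # Alice's message
        b = sum(yi & ri for yi, ri in zip(y, r)) % 2  # Bob's parity
        if a != b:
            return False  # a differing parity proves x != y
    return True

print(equality_protocol([1, 0, 1, 1], [1, 0, 1, 1]))  # True (always)
```

The communication is independent of the input length, matching the "low complexity for constant error with many random bits" regime the abstract contrasts against the restricted-randomness setting.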

Cite as

Dale Jacobs, John Jeang, Vladimir Podolskii, Morgan Prior, and Ilya Volkovich. Communication Complexity of Equality and Error-Correcting Codes. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 37:1-37:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{jacobs_et_al:LIPIcs.FSTTCS.2025.37,
  author =	{Jacobs, Dale and Jeang, John and Podolskii, Vladimir and Prior, Morgan and Volkovich, Ilya},
  title =	{{Communication Complexity of Equality and Error-Correcting Codes}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{37:1--37:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.37},
  URN =		{urn:nbn:de:0030-drops-251175},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.37},
  annote =	{Keywords: communication complexity, randomized communication complexity, error-correcting codes}
}
Document
Fairness and Efficiency in Two-Sided Matching Markets

Authors: Pallavi Jain, Palash Jha, and Shubham Solanki


Abstract
We propose a new fairness notion, motivated by the practical challenge of allocating teaching assistants (TAs) to courses in a department. Each course requires a certain number of TAs, and each TA has preferences over the courses they want to assist. Similarly, each course instructor has preferences over the TAs who applied for their course. We demand fairness and efficiency for both sides separately, giving rise to the following criteria: (i) every course gets the required number of TAs and the average utility of the assigned TAs meets a threshold; (ii) the allocation of courses to TAs is envy-free, where a TA envies another TA if the former prefers the latter’s course and has a higher or equal grade in that course. Note that this definition of envy-freeness differs from the one in the literature, and we call it merit-based envy-freeness. We show that the problem of finding a merit-based envy-free and efficient matching is NP-hard even in very restricted settings, such as two courses and uniform valuations; constant degree, constant capacity of TAs for every course, valuations in the range {0,1,2,3}, identical valuations from TAs; and even more. In search of tractability, we consider restricted instances, such as strict valuations of TAs for courses, the difference between the number of positively valued TAs for a course and its capacity, the number of positively valued TAs/courses, and special types of valuation functions, and obtain some polynomial-time solvable cases, in contrast with the intractability results. We further study the problem in the paradigm of parameterized algorithms and design exact and approximation algorithms.

Cite as

Pallavi Jain, Palash Jha, and Shubham Solanki. Fairness and Efficiency in Two-Sided Matching Markets. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 38:1-38:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{jain_et_al:LIPIcs.FSTTCS.2025.38,
  author =	{Jain, Pallavi and Jha, Palash and Solanki, Shubham},
  title =	{{Fairness and Efficiency in Two-Sided Matching Markets}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{38:1--38:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.38},
  URN =		{urn:nbn:de:0030-drops-251186},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.38},
  annote =	{Keywords: Fair Matching, Envy-Freeness, Efficiency}
}
Document
Improved Approximation for Pathwidth One Vertex Deletion and Parameterized Complexity of Its Variants

Authors: Satyabrata Jana, Soumen Mandal, Ashutosh Rai, and Saket Saurabh


Abstract
The pathwidth of a graph is a measure of how path-like the graph is. The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most one. This is a natural variation of the classical Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices results in a graph of treewidth at most one. In this work, we investigate POVD in the realm of approximation algorithms. We first design a 3-approximation algorithm for POVD running in polynomial time. Then, using this constant-factor approximation algorithm, we obtain a randomized parameterized approximation algorithm for POVD running in time 𝒪^*((h_β)^k), which improves the fastest existing running times for approximation ratios in the range (1.76147,3). Here the constant h_β depends on the approximation factor β alone and has value 2^{(3-β)}, which lies in the range (1,2.3596) when β ∈ (1.76147,3). Taking inspiration from two extensively studied problems, namely Connected FVS and Independent FVS, we investigate two variations of the POVD problem from the perspective of parameterized algorithms. These variations are the connected variant, called Connected Pathwidth One Vertex Deletion (CPOVD), and the independent variant, called Independent Pathwidth One Vertex Deletion (IPOVD). While in CPOVD the subgraph G[S] induced by the vertices to be deleted needs to be connected, in IPOVD it needs to be independent. Specifically, we show the following results.
- CPOVD can be solved in 𝒪^*(14^k) time and admits no polynomial kernel unless NP ⊆ co-NP/poly.
- IPOVD can be solved in 𝒪^*(7^k) time and admits a kernel of size 𝒪(k³).
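As background for POVD (an illustration of the target class, not the paper's algorithm): graphs of pathwidth at most one are exactly the caterpillar forests, so checking whether a candidate deletion set works reduces to a simple degree test on what remains. A minimal sketch of that test, assuming vertices 0..n-1 and an edge list:

```python
def has_pathwidth_at_most_one(n, edges):
    """Check pathwidth <= 1 via the caterpillar-forest characterization:
    the graph must be acyclic, and after discarding leaves every remaining
    vertex may keep at most two remaining neighbours (the spine is a path)."""
    parent = list(range(n))  # union-find for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    adj = [set() for _ in range(n)]
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # cycle => not a forest => pathwidth >= 2
        parent[ru] = rv
        adj[u].add(v)
        adj[v].add(u)

    spine = {v for v in range(n) if len(adj[v]) >= 2}  # non-leaf vertices
    return all(sum(w in spine for w in adj[v]) <= 2 for v in spine)

print(has_pathwidth_at_most_one(4, [(0, 1), (0, 2), (0, 3)]))  # True: a star
print(has_pathwidth_at_most_one(3, [(0, 1), (1, 2), (2, 0)]))  # False: a cycle
```

The subdivided star (three legs of length two) is the standard example that is a tree yet has pathwidth 2, and the test above correctly rejects it.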

Cite as

Satyabrata Jana, Soumen Mandal, Ashutosh Rai, and Saket Saurabh. Improved Approximation for Pathwidth One Vertex Deletion and Parameterized Complexity of Its Variants. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{jana_et_al:LIPIcs.FSTTCS.2025.39,
  author =	{Jana, Satyabrata and Mandal, Soumen and Rai, Ashutosh and Saurabh, Saket},
  title =	{{Improved Approximation for Pathwidth One Vertex Deletion and Parameterized Complexity of Its Variants}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.39},
  URN =		{urn:nbn:de:0030-drops-251192},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.39},
  annote =	{Keywords: Pathwidth, Parameterized complexity, Approximation, Kernelization}
}
Document
Improved Upper Bounds on Multiflow-Multicut Gaps in Cactus Graphs

Authors: Sina Kalantarzadeh and Nikhil Kumar


Abstract
Given a set of source-sink pairs, the maximum multiflow problem asks for the largest total amount of flow that can be feasibly routed between them. The minimum multicut problem, which is dual to multiflow, seeks the lowest-cost set of edges whose removal disconnects all source-sink pairs. It is straightforward to see that the value of a minimum multicut is at least that of the corresponding maximum multiflow. The ratio between the two is known as the multiflow-multicut gap. The classical max-flow min-cut theorem tells us that this gap is exactly one when there is only a single source-sink pair. However, for multiple source-sink pairs, the gap can be arbitrarily large. In this work, we investigate the multiflow-multicut gap in cactus graphs and establish the following results: (i) a tight upper bound of 1.5 for cycles; (ii) an upper bound of 2 + 2 ln 2 < 3.45 for general cactus graphs; (iii) a tight upper bound of 2 for unicyclic graphs, where the graph contains exactly one cycle; and (iv) a tight upper bound of 2 for path cactus graphs, where cycles are arranged along a single path. We develop novel generalizations of the classical rounding algorithm to establish our results.

Cite as

Sina Kalantarzadeh and Nikhil Kumar. Improved Upper Bounds on Multiflow-Multicut Gaps in Cactus Graphs. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 40:1-40:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kalantarzadeh_et_al:LIPIcs.FSTTCS.2025.40,
  author =	{Kalantarzadeh, Sina and Kumar, Nikhil},
  title =	{{Improved Upper Bounds on Multiflow-Multicut Gaps in Cactus Graphs}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{40:1--40:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.40},
  URN =		{urn:nbn:de:0030-drops-251205},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.40},
  annote =	{Keywords: Approximation Algorithms, Randomized Algorithms, Linear Programming, Graph Algorithms, Multicut, Multicommodity flow}
}
Document
Simultaneously Fair Allocation of Indivisible Items Across Multiple Dimensions

Authors: Yasushi Kawase, Bodhayan Roy, and Mohammad Azharuddin Sanpui


Abstract
This paper explores the fair allocation of indivisible items in a multidimensional setting, motivated by the need to address fairness in complex environments where agents assess bundles according to multiple criteria. Such multidimensional settings are not merely of theoretical interest but are central to many real-world applications. For example, cloud computing resources are evaluated based on multiple criteria such as CPU cores, memory, and network bandwidth. In such cases, traditional one-dimensional fairness notions fail to capture fairness across multiple attributes. To address these challenges, we study two relaxed variants of envy-freeness: weak simultaneously envy-free up to c goods (weak sEFc) and strong simultaneously envy-free up to c goods (strong sEFc), which accommodate the multidimensionality of agents’ preferences. Under the weak notion, for every pair of agents and for each dimension, any perceived envy can be eliminated by removing, if necessary, a different set of goods from the envied agent’s allocation. In contrast, the strong version requires selecting a single set of goods whose removal from the envied bundle simultaneously eliminates envy in every dimension. We provide upper and lower bounds on the relaxation parameter c that guarantee the existence of weak or strong sEFc allocations, where these bounds are independent of the total number of items. In addition, we present algorithms for checking whether a weak or strong sEFc allocation exists. Moreover, we establish NP-hardness results for checking the existence of weak sEF1 and strong sEF1 allocations.

Cite as

Yasushi Kawase, Bodhayan Roy, and Mohammad Azharuddin Sanpui. Simultaneously Fair Allocation of Indivisible Items Across Multiple Dimensions. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 41:1-41:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kawase_et_al:LIPIcs.FSTTCS.2025.41,
  author =	{Kawase, Yasushi and Roy, Bodhayan and Sanpui, Mohammad Azharuddin},
  title =	{{Simultaneously Fair Allocation of Indivisible Items Across Multiple Dimensions}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{41:1--41:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.41},
  URN =		{urn:nbn:de:0030-drops-251210},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.41},
  annote =	{Keywords: Fair allocation, Envy-free up to one good, Multi-dimensional criteria, Linear programming, NP-hardness}
}
Document
Degrees of Second and Higher-Order Polynomials

Authors: Donghyun Lim and Martin Ziegler


Abstract
Second-order polynomials generalize classical (=first-order) ones in allowing for additional variables that range over functions rather than values. We are motivated by their applications in higher-order computational complexity theory, extending for instance discrete classes (like P/FP or PSPACE/FPSPACE) to operators in Analysis [http://doi.org/10.1137/S0097539794263452], [http://doi.org/10.1145/2189778.2189780]. The degree subclassifies ordinary polynomial growth into linear, quadratic, cubic, etc. To similarly classify second-order polynomials, we (well-)define their degree by structural induction as an "arctic" first-order polynomial: a term/expression over integer variable D and operations + and ⋅ and binary max(). This generalized degree turns out to transform nicely under (now two kinds of) polynomial composition. As examples, we collect and determine the degrees of previous and new asymptotic analyses of algorithms and operators receiving function/oracle arguments. Then we motivate and introduce third-order polynomials and their degrees as arctic second-order polynomials, along with their transformations under three kinds of composition. Proceeding to fourth order and beyond yields a hierarchy, with characterization in Simply Typed Lambda Calculus.

Cite as

Donghyun Lim and Martin Ziegler. Degrees of Second and Higher-Order Polynomials. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{lim_et_al:LIPIcs.FSTTCS.2025.42,
  author =	{Lim, Donghyun and Ziegler, Martin},
  title =	{{Degrees of Second and Higher-Order Polynomials}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.42},
  URN =		{urn:nbn:de:0030-drops-251225},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.42},
  annote =	{Keywords: Logic in Computer Science, Higher Order Program Analysis, Asymptotic Type Theory}
}
Document
A Finer View of the Parameterized Landscape of Labeled Graph Contractions

Authors: Yashaswini Mathur and Prafullkumar Tale


Abstract
We study the Labeled Contractibility problem, where the input consists of two vertex-labeled graphs G and H, and the goal is to determine whether H can be obtained from G via a sequence of edge contractions. Lafond and Marchand [WADS 2025] initiated the parameterized complexity study of this problem, showing it to be W[1]-hard when parameterized by the number k of allowed contractions. They also proved that the problem is fixed-parameter tractable when parameterized by the tree-width tw of G, via an application of Courcelle’s theorem resulting in a non-constructive algorithm. In this work, we present a constructive fixed-parameter algorithm for Labeled Contractibility with running time 2^{𝒪(tw²)} ⋅ |V(G)|^{𝒪(1)}. We also prove that unless the Exponential Time Hypothesis (ETH) fails, it does not admit an algorithm running in time 2^{o(tw²)} ⋅ |V(G)|^{𝒪(1)}. This result adds Labeled Contractibility to a small list of problems that admit such a lower bound and a matching algorithm. We further strengthen existing hardness results by showing that the problem remains NP-complete even when both input graphs have bounded maximum degree. We also investigate parameterization by (k + δ(G)), where δ(G) denotes the degeneracy of G, and rule out the existence of subexponential-time algorithms. This answers a question raised by Lafond and Marchand [WADS 2025]. We additionally provide an improved FPT algorithm with better dependence on (k + δ(G)) than previously known. Finally, we analyze a brute-force algorithm for Labeled Contractibility with running time |V(H)|^{𝒪(|V(G)|)}, and show that this running time is optimal under ETH.

Cite as

Yashaswini Mathur and Prafullkumar Tale. A Finer View of the Parameterized Landscape of Labeled Graph Contractions. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 43:1-43:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{mathur_et_al:LIPIcs.FSTTCS.2025.43,
  author =	{Mathur, Yashaswini and Tale, Prafullkumar},
  title =	{{A Finer View of the Parameterized Landscape of Labeled Graph Contractions}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{43:1--43:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.43},
  URN =		{urn:nbn:de:0030-drops-251237},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.43},
  annote =	{Keywords: Labeled Contraction, ETH Lower-bound, Treewidth, NP-hard}
}
Document
Fall-Through Semantics for Mitigating Timing-Based Side Channel Leaks

Authors: Aniket Mishra and Abhishek Bichhawat


Abstract
With the recent advent of exploits like Spectre and Meltdown, the mitigation of side-channel attacks has become an important concern for security researchers. In this paper, we focus on timing-based side channels introduced through conditional branching on secret information within programs. We introduce a language that allows a programmer to write conditionals branching on secrets within its syntax, but has a semantics that keeps execution time constant with respect to an adversary under an observationally equivalent memory. We differ from other approaches that use program analysis methods, opting instead to modify the operational semantics to enforce the necessary properties. We formalize the semantics for our language with timing leak mitigations in Rocq (previously, Coq) and prove that these semantics satisfy the property of timing-sensitive non-interference. Since our system describes a mitigation approach for timing leaks in a general high-level imperative language, we believe that our semantics can be used as a basis for compiler construction for other high-level imperative languages that seek to be safe from timing side channels.

Cite as

Aniket Mishra and Abhishek Bichhawat. Fall-Through Semantics for Mitigating Timing-Based Side Channel Leaks. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{mishra_et_al:LIPIcs.FSTTCS.2025.44,
  author =	{Mishra, Aniket and Bichhawat, Abhishek},
  title =	{{Fall-Through Semantics for Mitigating Timing-Based Side Channel Leaks}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.44},
  URN =		{urn:nbn:de:0030-drops-251249},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.44},
  annote =	{Keywords: Timing leaks, information flow control, runtime monitor, type system, side-channel attacks}
}
Document
A Characterization of Spartan Graphs and New Lower Bounds for Eternal Vertex Cover

Authors: Neeldhara Misra and Saraswati Girish Nanoti


Abstract
The eternal vertex cover game is played between an attacker and a defender on an undirected graph G. The defender identifies k vertices to position guards initially. The attacker, on their turn, attacks an edge e, and the defender must move a guard along e to defend the attack. The defender may move other guards as well, under the constraint that every guard moves at most once and to a neighboring vertex. The smallest number of guards required to defend attacks forever is called the eternal vertex cover number of G, denoted evc(G). For any graph G, evc(G) is at least mvc(G) (the vertex cover number of G). A graph is Spartan if evc(G) = mvc(G). It is known that a bipartite graph is Spartan if and only if every edge belongs to a perfect matching. We show that the only König graphs that are Spartan are the bipartite Spartan graphs. We also give new lower bounds for evc(G), generalizing a known lower bound based on cut vertices. We finally show a new matching-based characterization of all Spartan graphs.
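The bipartite criterion quoted above (every edge lies in a perfect matching) is easy to test directly. The sketch below uses Kuhn's augmenting-path matching and illustrates only that known bipartite criterion, not the paper's new matching-based characterization of all Spartan graphs:

```python
def has_perfect_matching(left, right, adj):
    """Kuhn's augmenting-path algorithm; `adj` maps each left vertex to
    the set of right vertices it is adjacent to."""
    match = {}  # right vertex -> matched left vertex

    def augment(u, visited):
        for v in adj.get(u, ()):
            if v not in visited:
                visited.add(v)
                if v not in match or augment(match[v], visited):
                    match[v] = u
                    return True
        return False

    matched = sum(augment(u, set()) for u in left)
    return matched == len(left) == len(right)

def bipartite_spartan(left, right, adj):
    """A bipartite graph is Spartan iff every edge is in a perfect matching.
    Forcing an edge (u, v) means deleting u and v and matching the rest."""
    for u in left:
        for v in adj[u]:
            rest = {x: adj[x] - {v} for x in left if x != u}
            if not has_perfect_matching([x for x in left if x != u],
                                        [y for y in right if y != v], rest):
                return False
    return has_perfect_matching(left, right, adj)

# The 4-cycle a-b-c-d passes; the path a-b-c-d fails, since edge b-c lies
# in no perfect matching.
print(bipartite_spartan(['a', 'c'], ['b', 'd'],
                        {'a': {'b', 'd'}, 'c': {'b', 'd'}}))  # True
print(bipartite_spartan(['a', 'c'], ['b', 'd'],
                        {'a': {'b'}, 'c': {'b', 'd'}}))       # False
```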

Cite as

Neeldhara Misra and Saraswati Girish Nanoti. A Characterization of Spartan Graphs and New Lower Bounds for Eternal Vertex Cover. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 45:1-45:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{misra_et_al:LIPIcs.FSTTCS.2025.45,
  author =	{Misra, Neeldhara and Nanoti, Saraswati Girish},
  title =	{{A Characterization of Spartan Graphs and New Lower Bounds for Eternal Vertex Cover}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{45:1--45:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.45},
  URN =		{urn:nbn:de:0030-drops-251250},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.45},
  annote =	{Keywords: Eternal Vertex Cover, Vertex Cover, K\"{o}nig Graphs, Spartan Graphs, Matchings}
}
Document
Finding Small Dijoins in Transitive Closure Time

Authors: Chaitanya Nalam and Thatchaphol Saranurak


Abstract
We present a faster algorithm for finding a minimum dijoin, a smallest set of edges whose contraction makes a directed graph strongly connected. This problem has been studied since the 1960s [Seshu and Reed 1961] and is dual to finding a maximum-sized family of disjoint dicuts [Lucchesi and Younger 1978]. Given a directed graph G with n vertices and m edges whose minimum dijoin has size d, our algorithm outputs both a minimum dijoin and a maximum-sized family of disjoint dicuts in O(TC⋅d) time, where TC = min(mn, n^ω) is the time to compute the transitive closure. This improves upon the state of the art of [Gabow 1993], which requires O(TC ⋅ min(m^{1/2}, n^{2/3})) time, whenever d = o(min(m^{1/2}, n^{2/3})). Our result extends to finding a minimum weighted dijoin. We achieve this by observing that Frank's algorithm [Frank 1981] can be sped up when warm-started with a 2-approximate solution, which we show can be computed in near-linear time.
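
For intuition, the defining property of a dijoin can be checked by exhaustive search on tiny digraphs: contract a candidate arc set (via union-find) and test whether the quotient is strongly connected. A sketch with our own naming (exponential time, illustration of the definition only, not the paper's algorithm; assumes the underlying undirected graph is connected):

```python
from itertools import combinations

def strongly_connected(nodes, arcs):
    """DFS forward and backward from one node; a single node is trivially SC."""
    nodes = set(nodes)
    if len(nodes) <= 1:
        return True
    start = next(iter(nodes))
    def reach(edges):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen
    return reach(arcs) == nodes and reach([(b, a) for a, b in arcs]) == nodes

def min_dijoin(n, arcs):
    """Smallest arc set whose contraction makes the digraph strongly connected."""
    for k in range(len(arcs) + 1):
        for D in combinations(arcs, k):
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v in D:                      # contract each arc in D
                parent[find(u)] = find(v)
            qnodes = {find(i) for i in range(n)}
            qarcs = [(find(u), find(v)) for u, v in arcs if find(u) != find(v)]
            if strongly_connected(qnodes, qarcs):
                return list(D)

# Directed path 0 -> 1 -> 2 has two disjoint dicuts, so its minimum dijoin has size 2
print(len(min_dijoin(3, [(0, 1), (1, 2)])))  # 2
```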

Cite as

Chaitanya Nalam and Thatchaphol Saranurak. Finding Small Dijoins in Transitive Closure Time. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 46:1-46:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{nalam_et_al:LIPIcs.FSTTCS.2025.46,
  author =	{Nalam, Chaitanya and Saranurak, Thatchaphol},
  title =	{{Finding Small Dijoins in Transitive Closure Time}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{46:1--46:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.46},
  URN =		{urn:nbn:de:0030-drops-251265},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.46},
  annote =	{Keywords: Graph algorithms, Dijoin, Submodular flow}
}
Document
The Algebraic Cost of a Boolean Sum

Authors: Ian Orzel, Srikanth Srinivasan, Sébastien Tavenas, and Amir Yehudayoff


Abstract
It is well known that the permanent polynomial is complete for the complexity class VNP, and it is widely suspected that the determinant does not share this property, despite its similar expression. We study the question of why the VNP-completeness proof for the permanent fails for the determinant. We isolate three fundamental properties that suffice to prove that a polynomial sequence is VNP-hard, two of which are shared by the permanent and the determinant. We proceed to show that the permanent satisfies the third property, which we refer to as the "cost of a boolean sum", while the determinant does not, showcasing a fundamental difference between the two polynomial families. We further note that this differentiation also applies in the border-complexity setting and that our results extend to counting complexity.
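
The "similar expression" is the Leibniz formula: the determinant is the signed sum over permutations, the permanent the same sum without signs. A self-contained brute-force sketch (our naming):

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_and_perm(A):
    """Leibniz expansions: det(A) = sum sgn(s) * prod A[i][s(i)], perm drops sgn."""
    n = len(A)
    det = perm = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][s[i]]
        det += sign(s) * prod
        perm += prod
    return det, perm

print(det_and_perm([[1, 2], [3, 4]]))  # (-2, 10)
```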

Cite as

Ian Orzel, Srikanth Srinivasan, Sébastien Tavenas, and Amir Yehudayoff. The Algebraic Cost of a Boolean Sum. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 47:1-47:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{orzel_et_al:LIPIcs.FSTTCS.2025.47,
  author =	{Orzel, Ian and Srinivasan, Srikanth and Tavenas, S\'{e}bastien and Yehudayoff, Amir},
  title =	{{The Algebraic Cost of a Boolean Sum}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{47:1--47:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.47},
  URN =		{urn:nbn:de:0030-drops-251271},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.47},
  annote =	{Keywords: Algebraic Complexity, Computational Complexity, Permanent, Determinant}
}
Document
On the Roots of Independence Polynomial: Quantifying the Gap

Authors: Om Prakash and Vikram Sharma


Abstract
The independence polynomial of a graph G is the generating polynomial of its independent sets of different sizes. More formally, if a_k(G) denotes the number of independent sets of G of size k, then I(G,z) := ∑_k (-1)^k a_k(G) z^k. Evaluating I(G,z) has deep connections to problems in combinatorics, complexity theory, and statistical physics. Consequently, the roots of the independence polynomial have been studied in detail. In particular, many works have identified regions of the complex plane that are devoid of roots of the polynomial. One of the first such results gave a lower bound on the absolute value of the smallest root β(G) of the polynomial. Furthermore, when G is connected, Goldwurm and Santini established that β(G) is a simple real root of I(G,z) smaller than one. An alternative proof was given by Csikvári. Neither proof, however, provides a gap between β(G) and the smallest absolute value among the other roots of I(G,z). In this paper, we quantify this gap.
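
Under the sign convention of the abstract, the coefficients of I(G,z) can be enumerated directly on small graphs (brute-force sketch with our own helper name):

```python
from itertools import combinations

def independence_poly_coeffs(n, edges):
    """Coefficients c_k of I(G,z) = sum_k (-1)^k a_k(G) z^k, where a_k counts
    independent sets of size k in the graph on vertices 0..n-1."""
    adj = {frozenset(e) for e in edges}
    coeffs = []
    for k in range(n + 1):
        a_k = sum(
            1 for S in combinations(range(n), k)
            if all(frozenset(p) not in adj for p in combinations(S, 2))
        )
        coeffs.append((-1) ** k * a_k)
    while coeffs and coeffs[-1] == 0:   # drop sizes above the independence number
        coeffs.pop()
    return coeffs

# Path P3 (0-1-2): independent sets {}, {0}, {1}, {2}, {0,2}, so I = 1 - 3z + z^2
print(independence_poly_coeffs(3, [(0, 1), (1, 2)]))  # [1, -3, 1]
```

Here β(P3) = (3 − √5)/2 ≈ 0.382 is indeed a simple real root smaller than one, with the other root (3 + √5)/2 well separated from it.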

Cite as

Om Prakash and Vikram Sharma. On the Roots of Independence Polynomial: Quantifying the Gap. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 48:1-48:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{prakash_et_al:LIPIcs.FSTTCS.2025.48,
  author =	{Prakash, Om and Sharma, Vikram},
  title =	{{On the Roots of Independence Polynomial: Quantifying the Gap}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{48:1--48:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.48},
  URN =		{urn:nbn:de:0030-drops-251281},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.48},
  annote =	{Keywords: Independence Polynomial, Root separation, Zero-free regions}
}
Document
On the Hardness of Order Finding and Equivalence Testing for ROABPs

Authors: C. Ramya and Pratik Shastri


Abstract
The complexity of representing a polynomial by a Read-Once Oblivious Algebraic Branching Program (ROABP) is highly dependent on the chosen variable ordering. Bhargava et al. [Bhargava et al., 2024] prove that finding the optimal ordering is NP-hard, and provide some evidence (based on the Small Set Expansion hypothesis) that it is also hard to approximate the optimal ROABP width. In another work, Baraskar et al. [Baraskar et al., 2024] show that it is NP-hard to test whether a polynomial is in the GL_n orbit of a polynomial of sparsity at most s. Building upon these works, we show the following results: first, we prove that approximating the minimum ROABP width up to any constant factor is NP-hard, when the input is presented as a circuit. This removes the reliance on stronger conjectures in the previous work [Bhargava et al., 2024]. Second, we show that testing if an input polynomial given in the sparse representation is in the affine GL_n orbit of a width-w ROABP is NP-hard. Furthermore, we show that over fields of characteristic 0, the problem is NP-hard even when the input polynomial is homogeneous. This provides the first NP-hardness results for membership testing for a dense subclass of polynomial sized algebraic branching programs (VBP). Finally, we locate the source of hardness for the order finding problem at the lowest possible non-trivial degree, proving that the problem is NP-hard even for quadratic forms.
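
The order dependence can be seen concretely through a Nisan-style rank argument: for a fixed variable order, the width an ROABP needs at a cut equals the rank of the matrix whose (m1, m2) entry is the coefficient of m1·m2 in f, with m1 ranging over prefix monomials and m2 over suffix monomials. A sketch with our own helper names, for f = (x1+x3)(x2+x4), whose middle-cut ranks differ sharply between two orders:

```python
from fractions import Fraction
from itertools import combinations

def coeff_matrix_rank(coeffs, prefix, suffix):
    """Rank of M[m1][m2] = coefficient of the monomial m1*m2 in f, where m1
    runs over multilinear monomials in `prefix` and m2 over those in `suffix`."""
    def monomials(vars_):
        return [frozenset(c) for k in range(len(vars_) + 1)
                for c in combinations(vars_, k)]
    rows, cols = monomials(prefix), monomials(suffix)
    M = [[Fraction(coeffs.get(r | c, 0)) for c in cols] for r in rows]
    rank, col = 0, 0                     # Gaussian elimination over the rationals
    while rank < len(M) and col < len(cols):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            col += 1
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                fac = M[i][col] / M[rank][col]
                M[i] = [a - fac * b for a, b in zip(M[i], M[rank])]
        rank += 1
        col += 1
    return rank

# f = (x1+x3)(x2+x4) = x1x2 + x1x4 + x2x3 + x3x4, stored as monomial -> coefficient
f = {frozenset(m): 1 for m in [(1, 2), (1, 4), (2, 3), (3, 4)]}
print(coeff_matrix_rank(f, (1, 2), (3, 4)))  # 4: order x1,x2,x3,x4 is expensive
print(coeff_matrix_rank(f, (1, 3), (2, 4)))  # 1: order x1,x3,x2,x4 is cheap
```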

Cite as

C. Ramya and Pratik Shastri. On the Hardness of Order Finding and Equivalence Testing for ROABPs. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 49:1-49:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ramya_et_al:LIPIcs.FSTTCS.2025.49,
  author =	{Ramya, C. and Shastri, Pratik},
  title =	{{On the Hardness of Order Finding and Equivalence Testing for ROABPs}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{49:1--49:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.49},
  URN =		{urn:nbn:de:0030-drops-251296},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.49},
  annote =	{Keywords: ROABP, Order Finding, Equivalence Testing, NP-hardness, Hardness of Approximation}
}
Document
Additive, Near-Additive, and Multiplicative Approximations for APSP in Weighted Undirected Graphs: Trade-Offs and Algorithms

Authors: Liam Roditty and Ariel Sapir


Abstract
We present a +2∑_{i=1}^{k+1} W_i-APASP algorithm for dense weighted graphs with a runtime of Õ(n^{2+1/(3k+2)}), where W_i is the weight of an i-th heaviest edge on a shortest path between two vertices. Dor, Halperin and Zwick [FOCS'96 and SICOMP'00] introduced two algorithms for the corresponding unweighted +2⋅(k+1)-APASP problem: one for sparse graphs with a runtime of Õ(n^{2-1/(k+2)} m^{1/(k+2)}) and one for dense graphs with a runtime of Õ(n^{2+1/(3k+2)}). Subsequently, Cohen and Zwick [SODA'97 and JALG'01] adapted the algorithm for sparse graphs to the weighted setting, namely a +2∑_{i=1}^{k+1} W_i-APASP algorithm with the same Õ(n^{2-1/(k+2)} m^{1/(k+2)}) runtime. We close this nearly three-decade-old gap by providing an algorithm for dense weighted graphs, matching the runtime for the unweighted setting. In addition, we explore nearly additive APASP, where the multiplicative stretch is 1+ε. We present a (1+ε, min{2W₁,4W₂})-APASP algorithm with a runtime of Õ((1/ε)^{O(1)} ⋅ n^{2.15135313} ⋅ log W). This improves upon Saha and Ye [SODA'24], which achieves the same runtime but only a (1+ε, 2W₁)-APASP guarantee. For purely multiplicative APASP, we present a (7/3+ε)-APASP algorithm with a runtime of Õ((1/ε)^{O(1)} ⋅ n^{2.15135313} ⋅ log W). This improves, for dense graphs, upon the Õ(nm^{2/3}+n²) runtime of the 7/3-APASP algorithm by Baswana and Kavitha [FOCS'06 and SICOMP'10], at the cost of an additional ε in the multiplicative stretch. We further view this result within a broader framework of ((3𝓁+4)/(𝓁+2)+ε)-APASP algorithms, analogous to the family of (3𝓁+4)/(𝓁+2)-APASP algorithms by Akav and Roditty [ESA'21]. This also generalizes the (2+ε)-APASP algorithm by Dory, Forster, Kirkpatrick, Nazari, Vassilevska Williams, and de Vos [SODA'24]. Finally, we show that it is possible to "bypass" an Ω̃(n^ω) conditional lower bound by Dor, Halperin, and Zwick for α-APASP with α < 2 by allowing an additive component to the approximation (e.g., a ((6k+3)/(3k+2), ∑_{i=1}^{k+1} W_i)-APASP algorithm with Õ(n^{2+1/(3k+2)}) runtime).

Cite as

Liam Roditty and Ariel Sapir. Additive, Near-Additive, and Multiplicative Approximations for APSP in Weighted Undirected Graphs: Trade-Offs and Algorithms. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 50:1-50:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{roditty_et_al:LIPIcs.FSTTCS.2025.50,
  author =	{Roditty, Liam and Sapir, Ariel},
  title =	{{Additive, Near-Additive, and Multiplicative Approximations for APSP in Weighted Undirected Graphs: Trade-Offs and Algorithms}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{50:1--50:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.50},
  URN =		{urn:nbn:de:0030-drops-251309},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.50},
  annote =	{Keywords: Graph, Shortest Paths, Weighted Graphs, Approximation, Undirected, Single Source Shortest-Paths, Multi-Source Shortest-Paths, All-Pairs Shortest-Paths, SSSP, MSSP, MSASP, APSP, APASP}
}
Document
Randomized Black-Box PIT for Small Depth +-Regular Non-Commutative Circuits

Authors: G. V. Sumukha Bharadwaj and S. Raja


Abstract
In this paper, we address the black-box polynomial identity testing (PIT) problem for non-commutative polynomials computed by +-regular circuits, a class of homogeneous circuits introduced by Arvind, Joglekar, Mukhopadhyay, and Raja (STOC 2017, Theory of Computing 2019). These circuits can compute polynomials whose number of monomials is doubly exponential in the circuit size. Arvind et al. gave an efficient randomized PIT algorithm for +-regular circuits of depth 3 and left efficient black-box PIT for higher depths as an open problem. Our work makes progress on this question by resolving it for constant-depth +-regular circuits. We present a randomized black-box polynomial-time algorithm for +-regular circuits of any constant depth. Specifically, our algorithm runs in s^{O(d²)} time, where s and d denote the size and the depth of the +-regular circuit, respectively. Our approach combines several key techniques in a novel way. We employ a nondeterministic substitution automaton that transforms the polynomial into a structured form, and we use polynomial sparsification together with commutative transformations to maintain non-zeroness. Additionally, we introduce matrix composition, coefficient modification via the automaton, and multi-entry outputs, methods that have not previously been applied in the context of black-box PIT. Together, these techniques let us handle exponential degrees and doubly exponential sparsity in non-commutative settings, enabling polynomial identity testing for higher-depth circuits. In particular, we show that if f is a non-zero non-commutative polynomial in n variables over the field 𝔽, computed by a depth-d +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra 𝕄_{N}(𝔽), where N = s^{O(d²)} and the size of the field 𝔽 depends on the degree of f. Interestingly, the size of the matrices does not depend on the degree of f.
Our result can thus be interpreted as an Amitsur-Levitzki-type theorem [Amitsur and Levitzki, 1950] for polynomials computed by small-depth +-regular circuits.
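
The classical Amitsur-Levitzki theorem states that the standard polynomial S_{2n}(A_1,…,A_{2n}) = ∑_σ sgn(σ) A_{σ(1)}⋯A_{σ(2n)} vanishes identically on n×n matrices (and 2n is the minimal such degree). This is easy to check numerically for n = 2; a self-contained sketch with our own naming:

```python
from itertools import permutations

def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def standard_poly(mats):
    """S_k(A_1,...,A_k) = sum over permutations of sgn(s) * A_s(1) ... A_s(k)."""
    n = len(mats[0])
    total = [[0] * n for _ in range(n)]
    for p in permutations(range(len(mats))):
        prod = mats[p[0]]
        for idx in p[1:]:
            prod = matmul(prod, mats[idx])
        s = sign(p)
        for i in range(n):
            for j in range(n):
                total[i][j] += s * prod[i][j]
    return tuple(tuple(row) for row in total)

A, B, C, D = ((1, 2), (3, 4)), ((0, 1), (1, 0)), ((2, 0), (1, 1)), ((1, 1), (0, 3))
print(standard_poly([A, B, C, D]))  # ((0, 0), (0, 0)): S_4 is an identity for M_2
print(standard_poly([A, B]))        # AB - BA, generally non-zero
```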

Cite as

G. V. Sumukha Bharadwaj and S. Raja. Randomized Black-Box PIT for Small Depth +-Regular Non-Commutative Circuits. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 51:1-51:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{sumukhabharadwaj_et_al:LIPIcs.FSTTCS.2025.51,
  author =	{Sumukha Bharadwaj, G. V. and Raja, S.},
  title =	{{Randomized Black-Box PIT for Small Depth +-Regular Non-Commutative Circuits}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{51:1--51:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.51},
  URN =		{urn:nbn:de:0030-drops-250949},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.51},
  annote =	{Keywords: Polynomial Identity Testing, Non-commutative Circuits, Algebraic Circuits, +-Regular Circuits, Black-Box}
}
Document
Unreliability in Practical Subclasses of Communicating Systems

Authors: Amrita Suresh and Nobuko Yoshida


Abstract
Systems of communicating automata are prominent models for peer-to-peer message passing over unbounded channels, but in the general setting most verification properties are undecidable. To address this, two decidable subclasses, Realisable with Synchronous Communication (RSC) and k-Multiparty Compatibility (k-MC), were proposed in the literature, with corresponding verification tools developed and applied in practice. Unfortunately, neither RSC nor k-MC is resilient under failures: (1) their decidability relies on the assumption of perfect channels, and (2) most standard protocols do not satisfy RSC or k-MC under failures. To address these limitations, this paper studies the resilience of RSC and k-MC under two distinct failure models: interference and crash-stop failures. For interference, we relax the conditions of RSC and k-MC and prove that the inclusion checks for these relaxed properties remain decidable under interference, preserving their known complexity bounds. We then propose a novel crash-handling communicating system that captures a wider range of behaviours than existing multiparty session types (MPST) with crash-stop failures. We study a translation of MPST with crash-stop failures into this system, integrating the RSC and k-MC properties, and establish the corresponding decidability results. Finally, by verifying representative protocols from the literature using the RSC and k-MC tools extended to interference, we evaluate the relaxed systems and demonstrate their resilience.

Cite as

Amrita Suresh and Nobuko Yoshida. Unreliability in Practical Subclasses of Communicating Systems. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 52:1-52:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{suresh_et_al:LIPIcs.FSTTCS.2025.52,
  author =	{Suresh, Amrita and Yoshida, Nobuko},
  title =	{{Unreliability in Practical Subclasses of Communicating Systems}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{52:1--52:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.52},
  URN =		{urn:nbn:de:0030-drops-251312},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.52},
  annote =	{Keywords: Communicating automata, lossy channel, corruption, out of order, session types, crash-stop failure}
}
