54 Search Results for "Ball, Thomas"


Volume

LIPIcs, Volume 32

1st Summit on Advances in Programming Languages (SNAPL 2015)

SNAPL 2015, May 3-6, 2015, Asilomar, California, US

Editors: Thomas Ball, Rastislav Bodík, Shriram Krishnamurthi, Benjamin S. Lerner, and Greg Morrisett

Document
APPROX
Bipartizing (Pseudo-)Disk Graphs: Approximation with a Ratio Better than 3

Authors: Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, Jie Xue, and Meirav Zehavi

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
In a disk graph, every vertex corresponds to a disk in ℝ² and two vertices are connected by an edge whenever the two corresponding disks intersect. Disk graphs form an important class of geometric intersection graphs, which generalizes both planar graphs and unit-disk graphs. We study a fundamental optimization problem in algorithmic graph theory, Bipartization (also known as Odd Cycle Transversal), on the class of disk graphs. The goal of Bipartization is to delete a minimum number of vertices from the input graph such that the resulting graph is bipartite. A folklore (polynomial-time) 3-approximation algorithm for Bipartization on disk graphs follows from the classical framework of Goemans and Williamson [Combinatorica'98] for cycle-hitting problems. For over two decades, this result has remained the best known approximation for the problem (in fact, even for Bipartization on unit-disk graphs). In this paper, we achieve the first improvement upon this result, by giving a (3-α)-approximation algorithm for Bipartization on disk graphs, for some constant α > 0. Our algorithm directly generalizes to the broader class of pseudo-disk graphs. Furthermore, our algorithm is robust in the sense that it does not require a geometric realization of the input graph to be given.
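
To make the optimization objective concrete, here is a minimal Python sketch (not code from the paper) that checks whether deleting a given vertex set leaves a bipartite graph, and finds an optimal odd cycle transversal by brute force on tiny instances; the adjacency-dict representation and helper names are illustrative assumptions.

from itertools import combinations
from collections import deque

def is_bipartite_after_deletion(adj, deleted):
    """Check whether the graph minus `deleted` is bipartite, via BFS 2-coloring."""
    color = {}
    for source in adj:
        if source in deleted or source in color:
            continue
        color[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in deleted:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False  # an odd cycle survives the deletion
    return True

def brute_force_oct(adj):
    """Smallest vertex set whose removal makes the graph bipartite (exponential time)."""
    vertices = list(adj)
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_bipartite_after_deletion(adj, set(subset)):
                return set(subset)

# A 5-cycle is an odd cycle, so exactly one deletion suffices.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(brute_force_oct(c5))  # e.g. {0}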

Cite as

Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, Jie Xue, and Meirav Zehavi. Bipartizing (Pseudo-)Disk Graphs: Approximation with a Ratio Better than 3. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lokshtanov_et_al:LIPIcs.APPROX/RANDOM.2024.6,
  author =	{Lokshtanov, Daniel and Panolan, Fahad and Saurabh, Saket and Xue, Jie and Zehavi, Meirav},
  title =	{{Bipartizing (Pseudo-)Disk Graphs: Approximation with a Ratio Better than 3}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.6},
  URN =		{urn:nbn:de:0030-drops-209990},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.6},
  annote =	{Keywords: bipartization, geometric intersection graphs, approximation algorithms}
}
Document
Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning

Authors: Cormac Flanagan and Stephen N. Freund

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Rely-guarantee (RG) logic uses thread interference specifications (relies and guarantees) to reason about the correctness of multithreaded software. Unfortunately, RG logic requires each function postcondition to be "stabilized" or specialized to the behavior of other threads, making it difficult to write function specifications that are reusable at multiple call sites. This paper presents mover logic, which extends RG logic to address this problem via the notion of atomic functions. Atomic functions behave as if they execute serially without interference from concurrent threads, and so they can be assigned more general and reusable specifications that avoid the stabilization requirement of RG logic. Several practical verifiers (Calvin-R, QED, CIVL, Armada, Anchor, etc.) have demonstrated the modularity benefits of atomic function specifications. However, the complexity of these systems and their correctness proofs makes it challenging to understand and extend these systems. Mover logic formalizes the central ideas of reduction in a declarative program logic that provides a foundation for future work in this area.

Cite as

Cormac Flanagan and Stephen N. Freund. Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 16:1-16:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{flanagan_et_al:LIPIcs.ECOOP.2024.16,
  author =	{Flanagan, Cormac and Freund, Stephen N.},
  title =	{{Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{16:1--16:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.16},
  URN =		{urn:nbn:de:0030-drops-208654},
  doi =		{10.4230/LIPIcs.ECOOP.2024.16},
  annote =	{Keywords: concurrent program verification, reduction, rely-guarantee reasoning, synchronization}
}
Document
Fearless Asynchronous Communications with Timed Multiparty Session Protocols

Authors: Ping Hou, Nicolas Lagaillardie, and Nobuko Yoshida

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Session types using affinity and exception handling mechanisms have been developed to ensure the communication safety of protocols implemented in concurrent and distributed programming languages. Nevertheless, current affine session types are inadequate for specifying real-world asynchronous protocols, which are usually governed by time constraints and rely on timeouts to prevent indefinite blocking while awaiting valid messages. This paper proposes the first formal integration of affinity, time constraints, timeouts, and time-failure handling based on multiparty session types for supporting reliability in asynchronous distributed systems. With this theory, we statically guarantee that asynchronous timed communication is deadlock-free and communication-safe, while being fearless - never hindered by timeout errors or abrupt terminations. To implement our theory, we introduce MultiCrusty^T, a Rust toolchain designed to facilitate the implementation of safe affine timed protocols. MultiCrusty^T leverages generic types and the time library to handle timed communications, integrated with optional types for affinity. We evaluate MultiCrusty^T by extending diverse examples from the literature to incorporate time and timeouts. We also showcase the correctness by construction of our approach by implementing various real-world use cases, including protocols from the Internet of Remote Things domain and real-time systems.
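
The paper's MultiCrusty^T toolchain is written in Rust; as a language-neutral illustration of the pattern the abstract describes (a timed receive that never blocks indefinitely, with an explicit failure-handling branch on timeout), here is a small Python asyncio sketch. The queue-based channel and handler names are assumptions made for this example, not the MultiCrusty^T API.

import asyncio

async def producer(queue: asyncio.Queue) -> None:
    await asyncio.sleep(0.3)          # simulate a slow peer
    await queue.put("payload")

async def consumer(queue: asyncio.Queue, timeout: float) -> None:
    try:
        # A timed receive: the wait is bounded instead of potentially indefinite.
        msg = await asyncio.wait_for(queue.get(), timeout=timeout)
        print("received:", msg)
    except asyncio.TimeoutError:
        # Time-failure handling: abort this branch of the protocol cleanly.
        print("timed out, aborting the session branch")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    # With a 0.5 s deadline the receive succeeds; with 0.1 s the timeout path runs.
    await asyncio.gather(producer(queue), consumer(queue, timeout=0.5))

asyncio.run(main())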

Cite as

Ping Hou, Nicolas Lagaillardie, and Nobuko Yoshida. Fearless Asynchronous Communications with Timed Multiparty Session Protocols. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 19:1-19:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hou_et_al:LIPIcs.ECOOP.2024.19,
  author =	{Hou, Ping and Lagaillardie, Nicolas and Yoshida, Nobuko},
  title =	{{Fearless Asynchronous Communications with Timed Multiparty Session Protocols}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{19:1--19:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.19},
  URN =		{urn:nbn:de:0030-drops-208681},
  doi =		{10.4230/LIPIcs.ECOOP.2024.19},
  annote =	{Keywords: Session Types, Concurrency, Time Failure Handling, Affinity, Timeout, Rust}
}
Document
Formalizing, Mechanizing, and Verifying Class-Based Refinement Types

Authors: Ke Sun, Di Wang, Sheng Chen, Meng Wang, and Dan Hao

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Refinement types have been extensively used in class-based languages to specify and verify fine-grained logical specifications. Despite the advances in practical aspects such as applicability and usability, two fundamental issues persist. First, the soundness of existing class-based refinement type systems is inadequately explored, casting doubt on their reliability. Second, the expressiveness of existing systems is limited, restricting the depiction of semantic properties related to object-oriented constructs. This work tackles these issues through a systematic framework. We formalize a declarative class-based refinement type calculus (named RFJ), which is expressive and concise. We rigorously develop the soundness meta-theory of this calculus, followed by its mechanization in Coq. Finally, to ensure the calculus’s verifiability, we propose an algorithmic verification approach based on a fragment of first-order logic (named LFJ), and implement this approach as a type checker.

Cite as

Ke Sun, Di Wang, Sheng Chen, Meng Wang, and Dan Hao. Formalizing, Mechanizing, and Verifying Class-Based Refinement Types. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 39:1-39:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sun_et_al:LIPIcs.ECOOP.2024.39,
  author =	{Sun, Ke and Wang, Di and Chen, Sheng and Wang, Meng and Hao, Dan},
  title =	{{Formalizing, Mechanizing, and Verifying Class-Based Refinement Types}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{39:1--39:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.39},
  URN =		{urn:nbn:de:0030-drops-208881},
  doi =		{10.4230/LIPIcs.ECOOP.2024.39},
  annote =	{Keywords: Refinement Types, Program Verification, Object-oriented Programming}
}
Document
Inductive Predicate Synthesis Modulo Programs

Authors: Scott Wesley, Maria Christakis, Jorge A. Navas, Richard Trefler, Valentin Wüstholz, and Arie Gurfinkel

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
A growing trend in program analysis is to encode verification conditions within the language of the input program. This simplifies the design of analysis tools by utilizing off-the-shelf verifiers, but makes communication with the underlying solver more challenging. Essentially, the analysis tool operates at the level of input programs, whereas the solver operates at the level of problem encodings. To bridge this gap, the verifier must pass along proof-rules from the analysis tool to the solver. For example, an analysis tool for concurrent programs built on an inductive program verifier might need to declare Owicki-Gries style proof-rules for the underlying solver. Each such proof-rule further specifies how a program should be verified, meaning that the problem of passing proof-rules is a form of invariant synthesis. Similarly, many program analysis tasks reduce to the synthesis of pure, loop-free Boolean functions (i.e., predicates), relative to a program. From this observation, we propose Inductive Predicate Synthesis Modulo Programs (IPS-MP) which extends high-level languages with minimal synthesis features to guide analysis. In IPS-MP, unknown predicates appear under assume and assert statements, acting as specifications modulo the program semantics. Existing synthesis solvers are inefficient at IPS-MP as they target more general problems. In this paper, we show that IPS-MP admits an efficient solution in the Boolean case, despite being generally undecidable. Moreover, we show that IPS-MP reduces to the satisfiability of constrained Horn clauses, which is less general than existing synthesis problems, yet expressive enough to encode verification tasks. We provide reductions from challenging verification tasks - such as parameterized model checking - to IPS-MP. We realize these reductions with an efficient IPS-MP solver based on SeaHorn, and describe a real-world application to smart-contract verification.

Cite as

Scott Wesley, Maria Christakis, Jorge A. Navas, Richard Trefler, Valentin Wüstholz, and Arie Gurfinkel. Inductive Predicate Synthesis Modulo Programs. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 43:1-43:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{wesley_et_al:LIPIcs.ECOOP.2024.43,
  author =	{Wesley, Scott and Christakis, Maria and Navas, Jorge A. and Trefler, Richard and W\"{u}stholz, Valentin and Gurfinkel, Arie},
  title =	{{Inductive Predicate Synthesis Modulo Programs}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{43:1--43:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.43},
  URN =		{urn:nbn:de:0030-drops-208926},
  doi =		{10.4230/LIPIcs.ECOOP.2024.43},
  annote =	{Keywords: Software Verification, Invariant Synthesis, Model-Checking}
}
Document
Passive Learning of Regular Data Languages in Polynomial Time and Data

Authors: Mrudula Balachander, Emmanuel Filiot, and Raffaella Gentilini

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
A regular data language is a language over an infinite alphabet recognized by a deterministic register automaton (DRA), as defined by Benedikt, Ley and Puppis. The latter model, which is expressively equivalent to the deterministic finite-memory automata introduced earlier by Francez and Kaminski, enjoys unique minimal automata (up to isomorphism), based on a Myhill-Nerode theorem. In this paper, we introduce a polynomial time passive learning algorithm for regular data languages from positive and negative samples. Following Gold’s model for learning languages, we prove that our algorithm can identify in the limit any regular data language L, i.e. it returns a minimal DRA recognizing L if a characteristic sample set for L is provided as input. We prove that there exist characteristic sample sets of polynomial size with respect to the size of the minimal DRA recognizing L. To the best of our knowledge, this is the first passive learning algorithm for data languages, and the first learning algorithm which is fully polynomial, both with respect to time complexity and the size of the characteristic sample set.
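
As background on the automaton model, the following Python sketch simulates a one-register deterministic register automaton (DRA) over an infinite alphabet that accepts exactly the words whose second letter repeats the first. The encoding is an illustrative assumption; the paper's learning algorithm, which outputs minimal DRAs of this kind, is not reproduced here.

def run_one_register_dra(word):
    """One-register DRA: accept words (length >= 2) whose second letter equals the first.

    States: 'q0' (nothing read), 'q1' (first letter stored), 'acc' and 'rej' (sinks).
    """
    state, register = "q0", None
    for letter in word:
        if state == "q0":
            register = letter                      # store the data value in the register
            state = "q1"
        elif state == "q1":
            state = "acc" if letter == register else "rej"
        # 'acc' and 'rej' are sinks: later letters do not change the state.
    return state == "acc"

print(run_one_register_dra([7, 7]))       # True
print(run_one_register_dra(["a", "b"]))   # False
print(run_one_register_dra([3, 3, 99]))   # True (accepting sink)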

Cite as

Mrudula Balachander, Emmanuel Filiot, and Raffaella Gentilini. Passive Learning of Regular Data Languages in Polynomial Time and Data. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{balachander_et_al:LIPIcs.CONCUR.2024.10,
  author =	{Balachander, Mrudula and Filiot, Emmanuel and Gentilini, Raffaella},
  title =	{{Passive Learning of Regular Data Languages in Polynomial Time and Data}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.10},
  URN =		{urn:nbn:de:0030-drops-207829},
  doi =		{10.4230/LIPIcs.CONCUR.2024.10},
  annote =	{Keywords: Register automata, passive learning, automata over infinite alphabets}
}
Document
Effect Semantics for Quantum Process Calculi

Authors: Lorenzo Ceragioli, Fabio Gadducci, Giuseppe Lomurno, and Gabriele Tedeschi

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
The development of quantum communication protocols sparked the interest in quantum extensions of process calculi and behavioural equivalences, but defining a bisimilarity that matches the observational properties of a quantum-capable system is a surprisingly difficult task. The two proposals explicitly addressing this issue, qCCS and lqCCS, do not define an algorithmic verification scheme: the bisimilarity of two processes is proven by comparing their behaviour under all input states. We introduce a new semantic model based on effects, i.e. probabilistic predicates on quantum states that represent their observable properties. We define and investigate the properties of effect distributions and effect labelled transition systems (eLTSs), generalizing probability distributions and probabilistic labelled transition systems (pLTSs), respectively. As a proof of concept, we provide an eLTS-based semantics for a minimal quantum process algebra, which we prove sound and complete with respect to the observable probabilistic behaviour of quantum processes. To the best of our knowledge, ours is the first algorithmically verifiable proposal that abides by the properties of quantum theory.
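
To make the notion of an effect concrete: an effect is an operator E with 0 ≤ E ≤ I that assigns to a quantum state ρ the probability tr(Eρ) of observing the corresponding property. A short NumPy sketch follows; the particular state and effect are arbitrary examples, not taken from the paper.

import numpy as np

def effect_probability(effect: np.ndarray, rho: np.ndarray) -> float:
    """Probability tr(E rho) that the effect E (with 0 <= E <= I) assigns to state rho."""
    return float(np.real(np.trace(effect @ rho)))

# The state |+> = (|0> + |1>)/sqrt(2) as a density matrix.
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho_plus = plus @ plus.conj().T

# The effect "outcome |0> of a computational-basis measurement".
E0 = np.array([[1, 0], [0, 0]], dtype=complex)

print(effect_probability(E0, rho_plus))  # ~0.5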

Cite as

Lorenzo Ceragioli, Fabio Gadducci, Giuseppe Lomurno, and Gabriele Tedeschi. Effect Semantics for Quantum Process Calculi. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 16:1-16:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ceragioli_et_al:LIPIcs.CONCUR.2024.16,
  author =	{Ceragioli, Lorenzo and Gadducci, Fabio and Lomurno, Giuseppe and Tedeschi, Gabriele},
  title =	{{Effect Semantics for Quantum Process Calculi}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{16:1--16:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.16},
  URN =		{urn:nbn:de:0030-drops-207886},
  doi =		{10.4230/LIPIcs.CONCUR.2024.16},
  annote =	{Keywords: Quantum process calculi, probabilistic LTSs, effect LTSs}
}
Document
Anytime Approximate Formal Feature Attribution

Authors: Jinqiang Yu, Graham Farr, Alexey Ignatiev, and Peter J. Stuckey

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
The widespread use of artificial intelligence (AI) algorithms and machine learning (ML) models, together with a number of crucial issues pertaining to them, warrants the need for explainable artificial intelligence (XAI). A key explainability question is: given this decision was made, what are the input features which contributed to the decision? Although a range of XAI approaches exist to tackle this problem, most of them have significant limitations. Heuristic XAI approaches suffer from the lack of quality guarantees, and often try to approximate Shapley values, which is not the same as explaining which features contribute to a decision. A recent alternative is so-called formal feature attribution (FFA), which defines feature importance as the fraction of formal abductive explanations (AXp’s) containing the given feature. This measures feature importance from the view of formally reasoning about the model’s behavior. Namely, given a system of constraints logically representing the ML model of interest, computing an AXp requires finding a minimal unsatisfiable subset (MUS) of the system. It is challenging to compute FFA using its definition because that involves counting over all AXp’s (equivalently, counting over MUSes), although one can approximate it. Based on these results, this paper makes several contributions. First, it gives compelling evidence that computing FFA is intractable, even if the set of contrastive formal explanations (CXp’s), which correspond to minimal correction subsets (MCSes) of the logical system, is provided, by proving that the problem is #P-hard. Second, by using the duality between MUSes and MCSes, it proposes an efficient heuristic to switch from MCS enumeration to MUS enumeration on-the-fly, resulting in an adaptive explanation enumeration algorithm that effectively approximates FFA in an anytime fashion. Finally, experimental results obtained on a range of widely used datasets demonstrate the effectiveness of the proposed FFA approximation approach in terms of the error of FFA approximation as well as the number of explanations computed and their diversity given a fixed time limit.
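
The definition of FFA quoted above is easy to state in code: given an enumeration of abductive explanations (AXp's), the attribution of a feature is the fraction of AXp's containing it. Below is a minimal Python sketch under the assumption that the AXp's have already been enumerated (the enumeration itself is the hard, #P-hard part that the paper approximates in an anytime fashion); the function name and toy data are illustrative.

def formal_feature_attribution(axps):
    """FFA of each feature: the fraction of AXp's (sets of feature indices) containing it."""
    axps = list(axps)
    features = set().union(*axps) if axps else set()
    return {f: sum(f in axp for axp in axps) / len(axps) for f in features}

# Toy enumeration of AXp's over features {0, 1, 2}.
axps = [frozenset({0, 1}), frozenset({0, 2}), frozenset({0})]
print(formal_feature_attribution(axps))  # {0: 1.0, 1: 0.33..., 2: 0.33...}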

Cite as

Jinqiang Yu, Graham Farr, Alexey Ignatiev, and Peter J. Stuckey. Anytime Approximate Formal Feature Attribution. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 30:1-30:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{yu_et_al:LIPIcs.SAT.2024.30,
  author =	{Yu, Jinqiang and Farr, Graham and Ignatiev, Alexey and Stuckey, Peter J.},
  title =	{{Anytime Approximate Formal Feature Attribution}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{30:1--30:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.30},
  URN =		{urn:nbn:de:0030-drops-205526},
  doi =		{10.4230/LIPIcs.SAT.2024.30},
  annote =	{Keywords: Explainable AI, Formal Feature Attribution, Minimal Unsatisfiable Subsets, MUS Enumeration}
}
Document
Track A: Algorithms, Complexity and Games
Exploiting Automorphisms of Temporal Graphs for Fast Exploration and Rendezvous

Authors: Konstantinos Dogeas, Thomas Erlebach, Frank Kammer, Johannes Meintrup, and William K. Moses Jr.

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Temporal graphs are dynamic graphs where the edge set can change in each time step, while the vertex set stays the same. Exploration of temporal graphs whose snapshot in each time step is a connected graph, called connected temporal graphs, has been widely studied. In this paper, we extend the concept of graph automorphisms from static graphs to temporal graphs and show for the first time that symmetries enable faster exploration: We prove that a connected temporal graph with n vertices and orbit number r (i.e., r is the number of automorphism orbits) can be explored in O(r n^{1+ε}) time steps, for any fixed ε > 0. For r = O(n^c) for constant c < 1, this is a significant improvement over the known tight worst-case bound of Θ(n²) time steps for arbitrary connected temporal graphs. We also give two lower bounds for temporal exploration, showing that Ω(n log n) time steps are required for some inputs with r = O(1) and that Ω(rn) time steps are required for some inputs for any r with 1 ≤ r ≤ n. Moreover, we show that the techniques we develop for fast exploration can be used to derive the following result for rendezvous: Two agents with different programs and without communication ability are placed by an adversary at arbitrary vertices and given full information about the connected temporal graph, except that they do not have consistent vertex labels. Then the two agents can meet at a common vertex after O(n^{1+ε}) time steps, for any constant ε > 0. For some connected temporal graphs with the orbit number being a constant, we also present a complementary lower bound of Ω(n log n) time steps.
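
To fix the model, the sketch below represents a temporal graph in Python as a list of per-step edge sets, checks that every snapshot is connected (the defining property of connected temporal graphs), and lets a single agent traverse at most one available edge per time step. The greedy traversal policy is an illustrative assumption, not the exploration algorithm of the paper.

from collections import deque

def snapshot_is_connected(n, edges):
    """BFS connectivity check for one snapshot on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == n

def greedy_walk(n, snapshots, start=0):
    """One agent moves along at most one edge per time step, preferring unvisited vertices."""
    position, visited = start, {start}
    for edges in snapshots:
        neighbours = [v for u, v in edges if u == position]
        neighbours += [u for u, v in edges if v == position]
        fresh = [v for v in neighbours if v not in visited]
        if fresh:
            position = fresh[0]
            visited.add(position)
    return visited

snapshots = [{(0, 1), (1, 2)}, {(0, 2), (1, 2)}]          # 2 time steps on 3 vertices
assert all(snapshot_is_connected(3, e) for e in snapshots)
print(greedy_walk(3, snapshots))                          # {0, 1, 2}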

Cite as

Konstantinos Dogeas, Thomas Erlebach, Frank Kammer, Johannes Meintrup, and William K. Moses Jr.. Exploiting Automorphisms of Temporal Graphs for Fast Exploration and Rendezvous. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 55:1-55:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dogeas_et_al:LIPIcs.ICALP.2024.55,
  author =	{Dogeas, Konstantinos and Erlebach, Thomas and Kammer, Frank and Meintrup, Johannes and Moses Jr., William K.},
  title =	{{Exploiting Automorphisms of Temporal Graphs for Fast Exploration and Rendezvous}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{55:1--55:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.55},
  URN =		{urn:nbn:de:0030-drops-201989},
  doi =		{10.4230/LIPIcs.ICALP.2024.55},
  annote =	{Keywords: dynamic graphs, parameterized algorithms, algorithmic graph theory, graph automorphism, orbit number}
}
Document
Track A: Algorithms, Complexity and Games
Better Sparsifiers for Directed Eulerian Graphs

Authors: Sushant Sachdeva, Anvith Thudi, and Yibin Zhao

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives for the fast computation of fundamental problems on random walks, such as computing stationary distributions, hitting and commute times, and personalized PageRank vectors. While spectral sparsification is well understood for undirected graphs and it is known that for every graph G, (1+ε)-sparsifiers with O(nε^{-2}) edges exist [Batson-Spielman-Srivastava, STOC '09] (which is optimal), the best known constructions of Eulerian sparsifiers require Ω(nε^{-2}log⁴ n) edges and are based on short-cycle decompositions [Chu et al., FOCS '18]. In this paper, we give improved constructions of Eulerian sparsifiers, specifically: 1) We show that for every directed Eulerian graph G→, there exists an Eulerian sparsifier with O(nε^{-2} log² n log²log n + nε^{-4/3}log^{8/3} n) edges. This result is based on combining short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18, SICOMP] and [Parter-Yogev, ICALP '19], with recent progress on the matrix Spencer conjecture [Bansal-Meka-Jiang, STOC '23]. 2) We give an improved analysis of the constructions based on short-cycle decompositions, giving an m^{1+δ}-time algorithm for any constant δ > 0 for constructing Eulerian sparsifiers with O(nε^{-2}log³ n) edges.

Cite as

Sushant Sachdeva, Anvith Thudi, and Yibin Zhao. Better Sparsifiers for Directed Eulerian Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 119:1-119:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sachdeva_et_al:LIPIcs.ICALP.2024.119,
  author =	{Sachdeva, Sushant and Thudi, Anvith and Zhao, Yibin},
  title =	{{Better Sparsifiers for Directed Eulerian Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{119:1--119:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.119},
  URN =		{urn:nbn:de:0030-drops-202628},
  doi =		{10.4230/LIPIcs.ICALP.2024.119},
  annote =	{Keywords: Graph algorithms, Linear algebra and computation, Discrepancy theory}
}
Document
The Localized Union-Of-Balls Bifiltration

Authors: Michael Kerber and Matthias Söls

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
We propose an extension of the classical union-of-balls filtration of persistent homology: fixing a point q, we focus our attention on a ball centered at q whose radius is controlled by a second scale parameter. We discuss an absolute variant, where the union is just restricted to the q-ball, and a relative variant where the homology of the q-ball relative to its boundary is considered. Interestingly, these natural constructions lead to bifiltered simplicial complexes which are not k-critical for any finite k. Nevertheless, we demonstrate that these bifiltrations can be computed exactly and efficiently, and we provide a prototypical implementation using the CGAL library. We also argue that some of the recent algorithmic advances for 2-parameter persistence (which usually assume k-criticality for some finite k) carry over to the ∞-critical case.

Cite as

Michael Kerber and Matthias Söls. The Localized Union-Of-Balls Bifiltration. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 45:1-45:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kerber_et_al:LIPIcs.SoCG.2023.45,
  author =	{Kerber, Michael and S\"{o}ls, Matthias},
  title =	{{The Localized Union-Of-Balls Bifiltration}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{45:1--45:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.45},
  URN =		{urn:nbn:de:0030-drops-178953},
  doi =		{10.4230/LIPIcs.SoCG.2023.45},
  annote =	{Keywords: Topological Data Analysis, Multi-Parameter Persistence, Persistent Local Homology}
}
Document
Tight Bounds for Repeated Balls-Into-Bins

Authors: Dimitrios Los and Thomas Sauerwald

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
We study the repeated balls-into-bins process introduced by Becchetti, Clementi, Natale, Pasquale and Posta (2019). This process starts with m balls arbitrarily distributed across n bins. At each round t = 1,2,…, one ball is selected from each non-empty bin, and then placed into a bin chosen independently and uniformly at random. We prove the following results: - For any n ⩽ m ⩽ poly(n), we prove a lower bound of Ω(m/n ⋅ log n) on the maximum load. For the special case m = n, this matches the upper bound of 𝒪(log n), as shown in [Luca Becchetti et al., 2019]. It also provides a positive answer to the conjecture in [Luca Becchetti et al., 2019] that for m = n the maximum load is ω(log n/ log log n) at least once in a polynomially large time interval. For m ∈ [ω(n), n log n], our new lower bound disproves the conjecture in [Luca Becchetti et al., 2019] that the maximum load remains 𝒪(log n). - For any n ⩽ m ⩽ poly(n), we prove an upper bound of 𝒪(m/n ⋅ log n) on the maximum load for all steps of a polynomially large time interval. This matches our lower bound up to multiplicative constants. - For any m ⩾ n, our analysis also implies an 𝒪(m²/n) waiting time to reach a configuration with a 𝒪(m/n ⋅ log m) maximum load, even for worst-case initial distributions. - For m ⩾ n, we show that every ball visits every bin in 𝒪(m log m) rounds. For m = n, this improves the previous upper bound of 𝒪(n log² n) in [Luca Becchetti et al., 2019]. We also prove that the upper bound is tight up to multiplicative constants for any n ⩽ m ⩽ poly(n).
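
The process is simple to simulate, which makes the stated bounds easy to probe empirically on small instances. A minimal Python sketch follows; the parameter choices in the example call are arbitrary.

import random

def repeated_balls_into_bins(n, m, rounds, seed=0):
    """Simulate the process and report the maximum load observed over all rounds."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):                              # some initial distribution of the m balls
        loads[rng.randrange(n)] += 1
    max_load = max(loads)
    for _ in range(rounds):
        moving = sum(1 for load in loads if load > 0)       # one ball leaves each non-empty bin
        loads = [load - 1 if load > 0 else 0 for load in loads]
        for _ in range(moving):
            loads[rng.randrange(n)] += 1                    # each moving ball lands uniformly
        max_load = max(max_load, max(loads))
    return max_load

print(repeated_balls_into_bins(n=100, m=100, rounds=10_000))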

Cite as

Dimitrios Los and Thomas Sauerwald. Tight Bounds for Repeated Balls-Into-Bins. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 45:1-45:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{los_et_al:LIPIcs.STACS.2023.45,
  author =	{Los, Dimitrios and Sauerwald, Thomas},
  title =	{{Tight Bounds for Repeated Balls-Into-Bins}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{45:1--45:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.45},
  URN =		{urn:nbn:de:0030-drops-176975},
  doi =		{10.4230/LIPIcs.STACS.2023.45},
  annote =	{Keywords: Repeated balls-into-bins, self-stabilizing systems, balanced allocations, potential functions, random walks}
}
Document
Balanced Allocations with Incomplete Information: The Power of Two Queries

Authors: Dimitrios Los and Thomas Sauerwald

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We consider the allocation of m balls into n bins with incomplete information. In the classical Two-Choice process a ball first queries the load of two randomly chosen bins and is then placed in the least loaded bin. In our setting, each ball also samples two random bins but can only estimate a bin’s load by sending binary queries of the form "Is the load at least the median?" or "Is the load at least 100?". For the lightly loaded case m = 𝒪(n), Feldheim and Gurel-Gurevich (2021) showed that with one query it is possible to achieve a maximum load of 𝒪(√{log n/log log n}), and they also pose the question whether a maximum load of m/n+𝒪(√{log n/log log n}) is possible for any m = Ω(n). In this work, we resolve this open problem by proving a lower bound of m/n+Ω(√{log n}) for a fixed m = Θ(n √{log n}), and a lower bound of m/n+Ω(log n/log log n) for some m depending on the used strategy. We complement this negative result by proving a positive result for multiple queries. In particular, we show that with only two binary queries per chosen bin, there is an oblivious strategy which ensures a maximum load of m/n+𝒪(√{log n}) for any m ≥ 1. Further, for any number of k = 𝒪(log log n) binary queries, the upper bound on the maximum load improves to m/n + 𝒪(k(log n)^{1/k}) for any m ≥ 1. This result for k queries has several interesting consequences: (i) it implies new bounds for the (1+β)-process introduced by Peres, Talwar and Wieder (2015), (ii) it leads to new bounds for the graphical balanced allocation process on dense expander graphs, and (iii) it recovers and generalizes the bound of m/n+𝒪(log log n) on the maximum load achieved by the Two-Choice process, including the heavily loaded case m = Ω(n) which was derived in previous works by Berenbrink et al. (2006) as well as Talwar and Wieder (2014). One novel aspect of our proofs is the use of multiple super-exponential potential functions, which might be of use in future work.
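
The query model is likewise easy to simulate. The Python sketch below runs a Two-Choice-style allocation in which each ball may only ask two threshold questions about each sampled bin (here, against the median and the 75th percentile of the current load vector) and goes to the bin whose answers indicate the smaller load, breaking ties randomly. These thresholds are illustrative assumptions, not the oblivious strategy analyzed in the paper.

import random
import statistics

def two_query_allocation(n, m, seed=0):
    """Two-Choice where each sampled bin is only probed with two threshold queries."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        i, j = rng.randrange(n), rng.randrange(n)
        median = statistics.median(loads)
        q3 = statistics.quantiles(loads, n=4)[2]            # 75th percentile of current loads
        # Each query answers "is the load at least t?"; two answers give a coarse rank.
        rank_i = (loads[i] >= median) + (loads[i] >= q3)
        rank_j = (loads[j] >= median) + (loads[j] >= q3)
        if rank_i < rank_j:
            loads[i] += 1
        elif rank_j < rank_i:
            loads[j] += 1
        else:
            loads[rng.choice((i, j))] += 1                  # tie: pick one of the two at random
    return max(loads) - m / n                               # gap between maximum and average load

print(two_query_allocation(n=200, m=2000))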

Cite as

Dimitrios Los and Thomas Sauerwald. Balanced Allocations with Incomplete Information: The Power of Two Queries. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 103:1-103:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{los_et_al:LIPIcs.ITCS.2022.103,
  author =	{Los, Dimitrios and Sauerwald, Thomas},
  title =	{{Balanced Allocations with Incomplete Information: The Power of Two Queries}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{103:1--103:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.103},
  URN =		{urn:nbn:de:0030-drops-156994},
  doi =		{10.4230/LIPIcs.ITCS.2022.103},
  annote =	{Keywords: power-of-two-choices, balanced allocations, potential functions, thinning}
}
Document
Low Diameter Graph Decompositions by Approximate Distance Computation

Authors: Ruben Becker, Yuval Emek, and Christoph Lenzen

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge since the existing constructions of low diameter graph decomposition with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and simulating CONGEST algorithms on the tree into which the graph is embedded.
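
For background, the clustering idea of Miller et al. (SPAA 2013) that the paper builds on can be sketched in a few lines of Python for unweighted graphs, using exact distances; the paper's contribution, blurry ball growing, is precisely about tolerating approximate distances and is not reproduced here.

import heapq
import random

def exponential_shift_decomposition(adj, beta, seed=0):
    """Miller-Peng-Xu style clustering of an unweighted graph.

    Every vertex u draws a shift delta_u ~ Exp(beta); each vertex v joins the cluster
    of the u minimizing dist(u, v) - delta_u.  Larger beta yields smaller clusters,
    while each (unit-length) edge is cut with probability O(beta).
    """
    rng = random.Random(seed)
    shift = {u: rng.expovariate(beta) for u in adj}
    cluster = {}                                        # vertex -> cluster center
    heap = [(-shift[u], u, u) for u in adj]             # centers start with a head start
    heapq.heapify(heap)
    while heap:
        d, center, v = heapq.heappop(heap)
        if v in cluster:
            continue                                    # already claimed by a closer shifted center
        cluster[v] = center
        for w in adj[v]:
            if w not in cluster:
                heapq.heappush(heap, (d + 1, center, w))
    return cluster

cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # a 6-cycle
print(exponential_shift_decomposition(cycle, beta=0.5))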

Cite as

Ruben Becker, Yuval Emek, and Christoph Lenzen. Low Diameter Graph Decompositions by Approximate Distance Computation. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 50:1-50:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{becker_et_al:LIPIcs.ITCS.2020.50,
  author =	{Becker, Ruben and Emek, Yuval and Lenzen, Christoph},
  title =	{{Low Diameter Graph Decompositions by Approximate Distance Computation}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{50:1--50:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.50},
  URN =		{urn:nbn:de:0030-drops-117355},
  doi =		{10.4230/LIPIcs.ITCS.2020.50},
  annote =	{Keywords: graph decompositions, metric tree embeddings, distributed graph algorithms, parallel graph algorithms, (semi-)streaming graph algorithms}
}
  • Refine by Author
  • 6 Ball, Thomas
  • 4 Giesl, Jürgen
  • 3 Ball, Marshall
  • 3 Krishnamurthi, Shriram
  • 3 Sauerwald, Thomas

  • Refine by Classification
  • 3 Theory of computation → Design and analysis of algorithms
  • 2 Mathematics of computing → Discrete mathematics
  • 2 Mathematics of computing → Probability and statistics
  • 2 Software and its engineering → Concurrent programming languages
  • 2 Software and its engineering → Formal software verification

  • Refine by Keyword
  • 5 verification
  • 3 gradual typing
  • 2 Artificial Intelligence
  • 2 Deduction
  • 2 Formal Logic

  • Refine by Type
  • 53 document
  • 1 volume

  • Refine by Publication Year
  • 26 2015
  • 10 2024
  • 5 2010
  • 4 2020
  • 2 2005
