14 Search Results for "Block, Alexander R."


Document
Storylines with a Protagonist

Authors: Tim Hegemann and Alexander Wolff

Published in: LIPIcs, Volume 320, 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024)


Abstract
Storyline visualizations show interactions between a given set of characters over time. Each character is represented by an x-monotone curve. A meeting is represented by a vertical bar that is crossed by the curves of exactly those characters that participate in the meeting. Therefore, character curves may have to cross each other. In the context of publication networks, we consider storylines where the characters are authors and the meetings are joint publications. We are especially interested in visualizing a group of colleagues centered around an author, the protagonist, who participates in all selected publications. For such instances, we propose a drawing style where the protagonist’s curve is drawn at a prominent position and never crossed by any other author’s curve. We consider two variants of storylines with a protagonist. In the one-sided variant, the protagonist is required to be drawn at the top position. In this restricted setting, we can efficiently compute a drawing with the minimum number of pairwise crossings, whereas we show that it is NP-hard to minimize the number of block crossings (i.e., pairs of blocks of parallel curves that intersect each other). In the two-sided variant, the task is to split the set of co-authors of the protagonist into two groups, and to place the curves of one group above and the curves of the other group below the protagonist’s curve such that the total number of (block) crossings is minimized. As our main result, we present an algorithm for bundling a sequence of pairwise crossings into a sequence of few block crossings (in the absence of meetings). It exploits a connection to a rectangle dissection problem. In the presence of meetings, it yields results that are very close to a lower bound. Based on this bundling algorithm and our exact algorithm for the one-sided variant, we present a new heuristic for computing two-sided storylines with few block crossings. We perform an extensive experimental study using publication data of 81 protagonists from GD 2023 and their most frequent collaborators over the last ten years. Our study shows that, for two-sided storylines with a protagonist, our new heuristic uses fewer block crossings (and fewer pairwise crossings) than two heuristics for block crossing minimization in general storylines.
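
The number of pairwise crossings between two consecutive vertical orders of the character curves equals the number of inversions of the permutation relating the two orders. A minimal Python sketch of that counting step, for intuition only (the function name and interface are made up; this is not the paper's optimization algorithm):

def pairwise_crossings(order_before, order_after):
    """Count the curve crossings forced between two consecutive vertical orders.

    Both arguments list the same character identifiers; the result is the number
    of inversions of the permutation between them, i.e. the number of pairs of
    curves that must cross at least once in between.
    """
    position = {c: i for i, c in enumerate(order_after)}
    seq = [position[c] for c in order_before]
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

# Reversing three curves around each other forces three crossings:
print(pairwise_crossings(["a", "b", "c"], ["c", "b", "a"]))  # 3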

Cite as

Tim Hegemann and Alexander Wolff. Storylines with a Protagonist. In 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 320, pp. 26:1-26:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hegemann_et_al:LIPIcs.GD.2024.26,
  author =	{Hegemann, Tim and Wolff, Alexander},
  title =	{{Storylines with a Protagonist}},
  booktitle =	{32nd International Symposium on Graph Drawing and Network Visualization (GD 2024)},
  pages =	{26:1--26:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-343-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{320},
  editor =	{Felsner, Stefan and Klein, Karsten},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GD.2024.26},
  URN =		{urn:nbn:de:0030-drops-213109},
  doi =		{10.4230/LIPIcs.GD.2024.26},
  annote =	{Keywords: Storyline visualization, storyline with a protagonist, crossing minimization, block crossings}
}
Document
Lifting Dichotomies

Authors: Yaroslav Alekseev, Yuval Filmus, and Alexander Smal

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Lifting theorems are used for transferring lower bounds between Boolean function complexity measures. Given a lower bound on a complexity measure A for some function f, we compose f with a carefully chosen gadget function g and get essentially the same lower bound on a complexity measure B for the lifted function f ⋄ g. Lifting theorems have a number of applications in many different areas such as circuit complexity, communication complexity, proof complexity, etc. One of the main questions in the context of lifting is how to choose a suitable gadget g. Generally, to get better results, i.e., to minimize the losses when transferring lower bounds, we need the gadget to be of a constant size (number of inputs). Unfortunately, in many settings we know lifting results only for gadgets whose size grows with the size of f, and it is unclear whether this can be improved to a constant-size gadget. This motivates us to identify the properties of gadgets that make lifting possible. In this paper, we systematically study the question "For which gadgets does the lifting result hold?" in the following four settings: lifting from decision tree depth to decision tree size, lifting from conjunction DAG width to conjunction DAG size, lifting from decision tree depth to parity decision tree depth and size, and lifting from block sensitivity to deterministic and randomized communication complexities. In all these cases, we prove a complete classification of gadgets by exposing the properties of gadgets that make lifting results hold. The structure of the results shows that there are no intermediate cases: for every gadget there is either a polynomial lifting or no lifting at all. As a byproduct of our studies, we prove the log-rank conjecture for the class of functions that can be represented as f ⋄ OR ⋄ XOR for some function f. In this extended abstract, the proofs are omitted. Full proofs are given in the full version [Yaroslav Alekseev et al., 2024].
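
For orientation, the block-composition behind lifting is usually defined as follows (standard notation, which may differ slightly from the paper's conventions): for f : {0,1}ⁿ → {0,1} and a gadget g : {0,1}^m → {0,1},

  (f ⋄ g)(x_1, …, x_n) = f(g(x_1), …, g(x_n)),  where each x_i ∈ {0,1}^m,

and a lifting theorem asserts that the measure B of f ⋄ g is at least (roughly) a fixed polynomial in the measure A of f, so that lower bounds on A transfer to B.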

Cite as

Yaroslav Alekseev, Yuval Filmus, and Alexander Smal. Lifting Dichotomies. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alekseev_et_al:LIPIcs.CCC.2024.9,
  author =	{Alekseev, Yaroslav and Filmus, Yuval and Smal, Alexander},
  title =	{{Lifting Dichotomies}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.9},
  URN =		{urn:nbn:de:0030-drops-204051},
  doi =		{10.4230/LIPIcs.CCC.2024.9},
  annote =	{Keywords: decision trees, log-rank conjecture, lifting, parity decision trees}
}
Document
Track A: Algorithms, Complexity and Games
Learning Low-Degree Quantum Objects

Authors: Srinivasan Arunachalam, Arkopal Dutt, Francisco Escudero Gutiérrez, and Carlos Palazuelos

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We consider the problem of learning low-degree quantum objects up to ε-error in 𝓁₂-distance. We show the following results: (i) unknown n-qubit degree-d (in the Pauli basis) quantum channels and unitaries can be learned using O(1/ε^d) queries (which is independent of n), (ii) polynomials p:{-1,1}ⁿ → [-1,1] arising from d-query quantum algorithms can be learned from O((1/ε)^d ⋅ log n) many random examples (x,p(x)) (which implies learnability even for d = O(log n)), and (iii) degree-d polynomials p:{-1,1}ⁿ → [-1,1] can be learned through O(1/ε^d) queries to a quantum unitary U_p that block-encodes p. Our main technical contributions are new Bohnenblust-Hille inequalities for quantum channels and completely bounded polynomials.
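
As a reminder of the setting (standard definitions, stated here with hedging since the paper may use a different normalization): an n-qubit operator A has a Pauli expansion A = Σ_P Â(P)·P over P ∈ {I,X,Y,Z}^⊗n, and degree d in the Pauli basis means Â(P) = 0 whenever P acts non-trivially on more than d qubits. Learning up to ε in 𝓁₂-distance then amounts to outputting a hypothesis B with Σ_P |Â(P) − B̂(P)|² ≤ ε².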

Cite as

Srinivasan Arunachalam, Arkopal Dutt, Francisco Escudero Gutiérrez, and Carlos Palazuelos. Learning Low-Degree Quantum Objects. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arunachalam_et_al:LIPIcs.ICALP.2024.13,
  author =	{Arunachalam, Srinivasan and Dutt, Arkopal and Escudero Guti\'{e}rrez, Francisco and Palazuelos, Carlos},
  title =	{{Learning Low-Degree Quantum Objects}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{13:1--13:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.13},
  URN =		{urn:nbn:de:0030-drops-201563},
  doi =		{10.4230/LIPIcs.ICALP.2024.13},
  annote =	{Keywords: Tomography}
}
Document
Detecting Causality in the Presence of Byzantine Processes: The Synchronous Systems Case

Authors: Anshuman Misra and Ajay D. Kshemkalyani

Published in: LIPIcs, Volume 278, 30th International Symposium on Temporal Representation and Reasoning (TIME 2023)


Abstract
Detecting causality, or the happens-before relation, between events in a distributed system is a fundamental building block for distributed applications. It was recently proved that this problem cannot be solved in an asynchronous distributed system in the presence of Byzantine processes, irrespective of whether the communication mechanism is via unicasts, multicasts, or broadcasts. In light of this impossibility result, we turn our attention to synchronous systems and examine the possibility of solving the causality detection problem in such systems. In this paper, we prove that causality detection between events can be solved in the presence of Byzantine processes in a synchronous distributed system. The positive result holds for unicast, multicast, as well as broadcast modes of communication. We prove the result by providing an algorithm. Our solution uses the Replicated State Machine (RSM) approach and vector clocks.
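
The vector-clock ingredient has a compact standard form; the sketch below (generic, and not the Byzantine-tolerant algorithm of this paper, which additionally relies on the RSM approach) shows how the happens-before relation between two events is tested once trustworthy vector timestamps are available:

def happened_before(v_e, v_f):
    """Return True iff the event timestamped v_e causally precedes the one timestamped v_f."""
    return all(a <= b for a, b in zip(v_e, v_f)) and any(a < b for a, b in zip(v_e, v_f))

# Three-process run: e happens before f, but not the other way around.
print(happened_before([2, 1, 0], [2, 3, 1]))  # True
print(happened_before([2, 3, 1], [2, 1, 0]))  # False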

Cite as

Anshuman Misra and Ajay D. Kshemkalyani. Detecting Causality in the Presence of Byzantine Processes: The Synchronous Systems Case. In 30th International Symposium on Temporal Representation and Reasoning (TIME 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 278, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{misra_et_al:LIPIcs.TIME.2023.11,
  author =	{Misra, Anshuman and Kshemkalyani, Ajay D.},
  title =	{{Detecting Causality in the Presence of Byzantine Processes: The Synchronous Systems Case}},
  booktitle =	{30th International Symposium on Temporal Representation and Reasoning (TIME 2023)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-298-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{278},
  editor =	{Artikis, Alexander and Bruse, Florian and Hunsberger, Luke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2023.11},
  URN =		{urn:nbn:de:0030-drops-191017},
  doi =		{10.4230/LIPIcs.TIME.2023.11},
  annote =	{Keywords: Byzantine fault-tolerance, causality, happens before, distributed system, message-passing, synchronous system}
}
Document
Extended Abstract
Answer Set Automata: A Learnable Pattern Specification Framework for Complex Event Recognition (Extended Abstract)

Authors: Nikos Katzouris and Georgios Paliouras

Published in: LIPIcs, Volume 278, 30th International Symposium on Temporal Representation and Reasoning (TIME 2023)


Abstract
Complex Event Recognition (CER) systems detect event occurrences in streaming input using predefined event patterns. Techniques that learn event patterns from data are highly desirable in CER. Since such patterns are typically represented by symbolic automata, we propose a family of such automata where the transition-enabling conditions are defined by Answer Set Programming (ASP) rules, and which, thanks to the strong connections of ASP to symbolic learning, are learnable from data. We present such a learning approach in ASP, capable of jointly learning the structure of an automaton and its transition guards' definitions from building-block predicates, and a scalable, incremental version thereof that progressively revises models learnt from mini-batches using Monte Carlo Tree Search. We evaluate our approach on three CER datasets and empirically demonstrate its efficacy.
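
At its core, such a pattern is a symbolic automaton whose transition guards are conditions on the current input event (in the paper, ASP rules built from the domain's building-block predicates). A toy Python rendering of that structure, with plain predicates standing in for the learned guards:

class SymbolicAutomaton:
    """Tiny symbolic automaton: a transition fires when its guard predicate holds on the event."""

    def __init__(self, start, accepting, transitions):
        # transitions: list of (source_state, guard_predicate, target_state)
        self.start, self.accepting, self.transitions = start, accepting, transitions

    def accepts(self, events):
        state = self.start
        for event in events:
            for src, guard, dst in self.transitions:
                if src == state and guard(event):
                    state = dst
                    break
        return state in self.accepting

# Toy pattern: a high sensor reading eventually followed by a "stop" event.
aut = SymbolicAutomaton(
    start="q0", accepting={"q2"},
    transitions=[("q0", lambda e: e["value"] > 10, "q1"),
                 ("q1", lambda e: e["type"] == "stop", "q2")])
print(aut.accepts([{"value": 12, "type": "read"}, {"value": 0, "type": "stop"}]))  # True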

Cite as

Nikos Katzouris and Georgios Paliouras. Answer Set Automata: A Learnable Pattern Specification Framework for Complex Event Recognition (Extended Abstract). In 30th International Symposium on Temporal Representation and Reasoning (TIME 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 278, pp. 17:1-17:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{katzouris_et_al:LIPIcs.TIME.2023.17,
  author =	{Katzouris, Nikos and Paliouras, Georgios},
  title =	{{Answer Set Automata: A Learnable Pattern Specification Framework for Complex Event Recognition}},
  booktitle =	{30th International Symposium on Temporal Representation and Reasoning (TIME 2023)},
  pages =	{17:1--17:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-298-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{278},
  editor =	{Artikis, Alexander and Bruse, Florian and Hunsberger, Luke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2023.17},
  URN =		{urn:nbn:de:0030-drops-191071},
  doi =		{10.4230/LIPIcs.TIME.2023.17},
  annote =	{Keywords: Event Pattern Learning, Answer Set Programming}
}
Document
A Graph-Theoretic Formulation of Exploratory Blockmodeling

Authors: Alexander Bille, Niels Grüttemeier, Christian Komusiewicz, and Nils Morawietz

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We present a new simple graph-theoretic formulation of the exploratory blockmodeling problem on undirected and unweighted one-mode networks. Our formulation takes as input the network G and the maximum number t of blocks for the solution model. The task is to find a minimum-size set of edge insertions and deletions that transform the input graph G into a graph G' with at most t neighborhood classes. Herein, a neighborhood class is a maximal set of vertices with the same neighborhood. The neighborhood classes of G' directly give the blocks and block interactions of the computed blockmodel. We analyze the classic and parameterized complexity of the exploratory blockmodeling problem, provide a branch-and-bound algorithm, an ILP formulation and several heuristics. Finally, we compare our exact algorithms to previous ILP-based approaches and show that the new algorithms are faster for t ≥ 4.
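
Neighborhood classes are simple to compute directly; the sketch below groups the vertices of an undirected graph by their neighborhoods, which is exactly the quantity that the formulation bounds by t after the edge edits (an illustration of the definition only, not the paper's branch-and-bound or ILP):

from collections import defaultdict

def neighborhood_classes(adj):
    """Group vertices with identical neighborhoods (adj maps each vertex to its neighbor set)."""
    classes = defaultdict(list)
    for v, neighbors in adj.items():
        classes[frozenset(neighbors)].append(v)
    return list(classes.values())

# A 4-cycle a-b-c-d has two neighborhood classes, {a, c} and {b, d}.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(neighborhood_classes(adj))  # [['a', 'c'], ['b', 'd']]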

Cite as

Alexander Bille, Niels Grüttemeier, Christian Komusiewicz, and Nils Morawietz. A Graph-Theoretic Formulation of Exploratory Blockmodeling. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bille_et_al:LIPIcs.SEA.2023.14,
  author =	{Bille, Alexander and Gr\"{u}ttemeier, Niels and Komusiewicz, Christian and Morawietz, Nils},
  title =	{{A Graph-Theoretic Formulation of Exploratory Blockmodeling}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.14},
  URN =		{urn:nbn:de:0030-drops-183648},
  doi =		{10.4230/LIPIcs.SEA.2023.14},
  annote =	{Keywords: Clustering, Exact Algorithms, ILP-Formulation, Branch-and-Bound, Social Networks}
}
Document
On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors

Authors: Alexander R. Block, Jeremiah Blocki, Kuan Cheng, Elena Grigorescu, Xin Li, Yu Zheng, and Minshen Zhu

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Locally Decodable Codes (LDCs) are error-correcting codes C:Σⁿ → Σ^m, encoding messages in Σⁿ to codewords in Σ^m, with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length m that is super-polynomial in n, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (SICOMP 2006) showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly improved constructions of Probabilistically Checkable Proofs. In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Standard LDCs for Insdel errors were first studied by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), and are further motivated by recent advances in DNA random access bio-technologies. Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries (even adaptively), over the binary alphabet. This answers a question explicitly raised by Gur and Lachish (SICOMP 2021) and is the first exponential lower bound for RLDCs. Combined with the results of Ben-Sasson et al., our result exhibits a "phase-transition"-type behavior in the codeword length at some constant query complexity. We achieve these lower bounds via a transformation of RLDCs to standard Hamming LDCs, using a careful analysis of restrictions of message bits that fix codeword bits. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with almost linear codeword length and constant query complexity, matching the parameters of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.
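
For orientation, the relaxed decoding guarantee is commonly stated as follows (textbook form; the paper's constants and exact conditions may differ). A q-query RLDC has a decoder that, on an index i and with oracle access to a word w within relative distance δ of some codeword C(x), makes at most q queries and outputs either a symbol or a special sign ⊥, such that the decoder always recovers x_i from an uncorrupted codeword, and Pr[Dec^w(i) ∈ {x_i, ⊥}] ≥ 2/3 on corrupted words; stronger variants additionally require that ⊥ is returned only for a small fraction of indices.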

Cite as

Alexander R. Block, Jeremiah Blocki, Kuan Cheng, Elena Grigorescu, Xin Li, Yu Zheng, and Minshen Zhu. On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{block_et_al:LIPIcs.CCC.2023.14,
  author =	{Block, Alexander R. and Blocki, Jeremiah and Cheng, Kuan and Grigorescu, Elena and Li, Xin and Zheng, Yu and Zhu, Minshen},
  title =	{{On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.14},
  URN =		{urn:nbn:de:0030-drops-182847},
  doi =		{10.4230/LIPIcs.CCC.2023.14},
  annote =	{Keywords: Relaxed Locally Decodable Codes, Hamming Errors, Insdel Errors, Lower Bounds}
}
Document
A Fast Data Structure for Dynamic Graphs Based on Hash-Indexed Adjacency Blocks

Authors: Alexander van der Grinten, Maria Predari, and Florian Willich

Published in: LIPIcs, Volume 233, 20th International Symposium on Experimental Algorithms (SEA 2022)


Abstract
Several dynamic graph data structures have been proposed in the literature. Yet, these data structures either offer limited support for arbitrary graph algorithms or they are designed as part of specific frameworks (e.g., for GPUs or specialized hardware). Such frameworks are difficult to adapt to arbitrary graph computations and lead practitioners to fall back to less sophisticated solutions when dealing with dynamic graphs. In this work, we propose a new "dynamic hashed blocks" (DHB) data structure for sparse dynamic graphs and matrices on general-purpose CPU architectures. DHB combines an efficient block-based memory layout to store incident edges with an additional per-vertex hash index for high-degree vertices. This hash index allows us to quickly insert edges without introducing duplicates, while the block-based memory layout retains the advantageous cache locality properties of traditional adjacency arrays. Experiments show that DHB outperforms competing dynamic graph structures for edge insertions, updates, deletions, and traversal operations. Compared to static CSR layouts, DHB exhibits only a small overhead in traversal performance. DHB’s interface is similar to general-purpose abstract graph data types and can easily be used as a drop-in replacement for traditional adjacency arrays. To demonstrate this, we modify the well-known NetworKit framework to use DHB instead of its own dynamic graph representation. Experiments show that this modification only slightly penalizes the performance of graph algorithms while considerably boosting update rates.
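
A toy version of the idea, per-vertex edge blocks plus a hash index that is materialized only once a vertex's degree gets large, so that insertions stay duplicate-free without scanning long blocks, might look as follows (illustrative Python only; the real DHB is a cache-conscious structure for general-purpose CPUs):

class HashedAdjacency:
    """Toy per-vertex edge block with a lazily built hash index for high-degree vertices."""

    HASH_THRESHOLD = 8  # build the index once a vertex reaches this many neighbors

    def __init__(self):
        self.block = []    # the adjacency "block": a plain list of neighbor ids
        self.index = None  # hash index (a set), created only for high-degree vertices

    def insert(self, neighbor):
        if self.index is not None:
            if neighbor in self.index:  # O(1) duplicate check for high-degree vertices
                return False
            self.index.add(neighbor)
        else:
            if neighbor in self.block:  # a linear scan is fine while the block is short
                return False
            if len(self.block) + 1 >= self.HASH_THRESHOLD:
                self.index = set(self.block) | {neighbor}
        self.block.append(neighbor)
        return True

v = HashedAdjacency()
print(v.insert(3), v.insert(3))  # True False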

Cite as

Alexander van der Grinten, Maria Predari, and Florian Willich. A Fast Data Structure for Dynamic Graphs Based on Hash-Indexed Adjacency Blocks. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 11:1-11:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{vandergrinten_et_al:LIPIcs.SEA.2022.11,
  author =	{van der Grinten, Alexander and Predari, Maria and Willich, Florian},
  title =	{{A Fast Data Structure for Dynamic Graphs Based on Hash-Indexed Adjacency Blocks}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.11},
  URN =		{urn:nbn:de:0030-drops-165453},
  doi =		{10.4230/LIPIcs.SEA.2022.11},
  annote =	{Keywords: dynamic graph data structures, sparse matrix layout, dynamic algorithms, parallel algorithms, graph analysis}
}
Document
P₄-free Partition and Cover Numbers & Applications

Authors: Alexander R. Block, Simina Brânzei, Hemanta K. Maji, Himanshi Mehta, Tamalika Mukherjee, and Hai H. Nguyen

Published in: LIPIcs, Volume 199, 2nd Conference on Information-Theoretic Cryptography (ITC 2021)


Abstract
P₄-free graphs (also known as cographs, complement-reducible graphs, or hereditary Dacey graphs) have been well studied in graph theory. Motivated by computer science and information theory applications, our work encodes (flat) joint probability distributions and Boolean functions as bipartite graphs and studies bipartite P₄-free graphs. For these applications, the graph properties of edge partitioning and covering a bipartite graph using the minimum number of these graphs are particularly relevant. Previously, such graph properties have appeared in leakage-resilient cryptography and (variants of) coloring problems. Interestingly, our covering problem is closely related to the well-studied problem of product (a.k.a., Prague) dimension of loopless undirected graphs, which allows us to employ algebraic lower-bounding techniques for the product/Prague dimension. We prove that computing these numbers is NP-complete, even for bipartite graphs. We establish a connection to the (unsolved) Zarankiewicz problem to show that there are bipartite graphs with size-N partite sets such that these numbers are at least ε⋅N^{1-2ε}, for ε ∈ {1/3,1/4,1/5,...}. Finally, we accurately estimate these numbers for bipartite graphs encoding well-studied Boolean functions from circuit complexity, such as set intersection, set disjointness, and inequality. For applications in information theory and communication & cryptographic complexity, we consider a system where a setup samples from a (flat) joint distribution and gives the participants, Alice and Bob, their portion from this joint sample. Alice and Bob’s objective is to non-interactively establish a shared key and extract the left-over entropy from their portion of the samples as independent private randomness. A genie, who observes the joint sample, provides appropriate assistance to help Alice and Bob with their objective. Lower bounds to the minimum size of the genie’s assistance translate into communication and cryptographic lower bounds. We show that (the log₂ of) the P₄-free partition number of a graph encoding the joint distribution that the setup uses is equivalent to the size of the genie’s assistance. Consequently, the joint distributions corresponding to the bipartite graphs constructed above with high P₄-free partition numbers correspond to joint distributions requiring more assistance from the genie. As a representative application in non-deterministic communication complexity, we study the communication complexity of nondeterministic protocols augmented by access to the equality oracle at the output. We show that (the log₂ of) the P₄-free cover number of the bipartite graph encoding a Boolean function f is equivalent to the minimum size of the nondeterministic input required by the parties (referred to as the communication complexity of f in this model). Consequently, the functions corresponding to the bipartite graphs with high P₄-free cover numbers have high communication complexity. Furthermore, there are functions with communication complexity close to the naïve protocol where the nondeterministic input reveals a party’s input. Finally, the access to the equality oracle reduces the communication complexity of computing set disjointness by a constant factor in contrast to the model where parties do not have access to the equality oracle. To compute the inequality function, we show an exponential reduction in the communication complexity, and this bound is optimal. On the other hand, access to the equality oracle is (nearly) useless for computing set intersection.
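
For small instances, the defining property can be checked by brute force: a graph is P₄-free iff no four vertices induce a path on four vertices. A Python sketch of that check (fine for toy graphs; the partition and cover problems studied in the paper remain NP-complete):

from itertools import combinations, permutations

def has_induced_p4(adj):
    """Return True iff the graph (adj maps each vertex to its neighbor set) has an induced P4."""
    for quad in combinations(list(adj), 4):
        edges = {frozenset(e) for e in combinations(quad, 2) if e[1] in adj[e[0]]}
        if len(edges) != 3:
            continue  # an induced P4 has exactly three edges
        for a, b, c, d in permutations(quad):
            if edges == {frozenset((a, b)), frozenset((b, c)), frozenset((c, d))}:
                return True
    return False

# The path a-b-c-d itself is, of course, an induced P4.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(has_induced_p4(adj))  # True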

Cite as

Alexander R. Block, Simina Brânzei, Hemanta K. Maji, Himanshi Mehta, Tamalika Mukherjee, and Hai H. Nguyen. P₄-free Partition and Cover Numbers & Applications. In 2nd Conference on Information-Theoretic Cryptography (ITC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 199, pp. 16:1-16:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{block_et_al:LIPIcs.ITC.2021.16,
  author =	{Block, Alexander R. and Br\^{a}nzei, Simina and Maji, Hemanta K. and Mehta, Himanshi and Mukherjee, Tamalika and Nguyen, Hai H.},
  title =	{{P₄-free Partition and Cover Numbers \& Applications}},
  booktitle =	{2nd Conference on Information-Theoretic Cryptography (ITC 2021)},
  pages =	{16:1--16:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-197-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{199},
  editor =	{Tessaro, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2021.16},
  URN =		{urn:nbn:de:0030-drops-143357},
  doi =		{10.4230/LIPIcs.ITC.2021.16},
  annote =	{Keywords: Secure keys, Secure private randomness, Gray-Wyner system, Cryptographic complexity, Nondeterministic communication complexity, Leakage-resilience, Combinatorial optimization, Product dimension, Zarankiewicz problem, Algebraic lower-bounding techniques, P₄-free partition number, P₄-free cover number}
}
Document
Invited Talk
Byzantine Agreement and SMR with Sub-Quadratic Message Complexity (Invited Talk)

Authors: Idit Keidar

Published in: LIPIcs, Volume 184, 24th International Conference on Principles of Distributed Systems (OPODIS 2020)


Abstract
Byzantine Agreement (BA) has been studied for four decades by now, but, until recently, has been considered at a fairly small scale. In recent years, however, we have begun to see practical use cases of BA in large-scale systems, which motivates a push for reduced communication complexity. Dolev and Reischuk’s well-known lower bound stipulates that any deterministic algorithm requires Ω(n²) communication in the worst case, and until fairly recently, almost all randomized algorithms have had at least quadratic complexity as well. This talk will present two new algorithms breaking this barrier. The first part of the talk will consider a fully asynchronous setting, focusing on randomized BA whose safety and liveness guarantees hold with high probability. It will present the first asynchronous Byzantine Agreement algorithm with sub-quadratic communication complexity. This algorithm exploits VRF-based committee sampling, which it adapts for the asynchronous model. The second part of the talk will consider the eventually synchronous model, where BA and State Machine Replication (SMR) can be solved with deterministic safety and liveness guarantees. In this context, randomization is used in order to reduce the expected communication complexity. The talk will present an algorithm for round synchronization, which is a building block for BA and SMR and constitutes the main performance bottleneck therein. It will present an algorithm that, for the first time, achieves round synchronization with expected linear message complexity and expected constant latency. Existing protocols can use this round synchronization algorithm to solve Byzantine SMR with the same asymptotic performance. The first part of the talk is based on joint work with Shir Cohen and Alexander Spiegelman, and the second part of the talk is based on joint work with Oded Naor.

Cite as

Idit Keidar. Byzantine Agreement and SMR with Sub-Quadratic Message Complexity (Invited Talk). In 24th International Conference on Principles of Distributed Systems (OPODIS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 184, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{keidar:LIPIcs.OPODIS.2020.2,
  author =	{Keidar, Idit},
  title =	{{Byzantine Agreement and SMR with Sub-Quadratic Message Complexity}},
  booktitle =	{24th International Conference on Principles of Distributed Systems (OPODIS 2020)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-176-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{184},
  editor =	{Bramas, Quentin and Oshman, Rotem and Romano, Paolo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2020.2},
  URN =		{urn:nbn:de:0030-drops-134874},
  doi =		{10.4230/LIPIcs.OPODIS.2020.2},
  annote =	{Keywords: Distributed Computing, Byzantine Agreement}
}
Document
ACE: Abstract Consensus Encapsulation for Liveness Boosting of State Machine Replication

Authors: Alexander Spiegelman, Arik Rinberg, and Dahlia Malkhi

Published in: LIPIcs, Volume 184, 24th International Conference on Principles of Distributed Systems (OPODIS 2020)


Abstract
With the emergence of attack-prone cross-organization systems, providing asynchronous state machine replication (SMR) solutions is no longer a theoretical concern. This paper presents ACE, a framework for the design of such fault-tolerant systems. Leveraging a known paradigm for randomized consensus solutions, ACE wraps existing practical solutions and real-life systems, boosting their liveness under adversarial conditions and, at the same time, promoting load balancing and fairness. Boosting is achieved without modifying the overall design or the engineering of these solutions. ACE is aimed at boosting the prevailing approach for practical fault tolerance. This approach, often named partial synchrony, is based on a leader-based paradigm: a good leader makes progress and a bad leader does no harm. The partial synchrony approach focuses on safety and forgoes liveness under targeted and dynamic attacks. Specifically, an attacker might block specific leaders, e.g., through a denial of service, to prevent progress. ACE provides boosting by running waves of parallel leaders and selecting a winning leader only retroactively, achieving boosting at a linear increase in communication cost. ACE is agnostic to the fault model, inheriting its failure model from the wrapped solution’s assumptions. As our evaluation shows, an asynchronous Byzantine fault tolerance (BFT) replication system built with ACE around an existing partially synchronous BFT protocol demonstrates a reasonable slow-down compared with the base BFT protocol during faultless synchronous scenarios, yet exhibits significant speedup while the system is under attack.

Cite as

Alexander Spiegelman, Arik Rinberg, and Dahlia Malkhi. ACE: Abstract Consensus Encapsulation for Liveness Boosting of State Machine Replication. In 24th International Conference on Principles of Distributed Systems (OPODIS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 184, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{spiegelman_et_al:LIPIcs.OPODIS.2020.9,
  author =	{Spiegelman, Alexander and Rinberg, Arik and Malkhi, Dahlia},
  title =	{{ACE: Abstract Consensus Encapsulation for Liveness Boosting of State Machine Replication}},
  booktitle =	{24th International Conference on Principles of Distributed Systems (OPODIS 2020)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-176-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{184},
  editor =	{Bramas, Quentin and Oshman, Rotem and Romano, Paolo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2020.9},
  URN =		{urn:nbn:de:0030-drops-134948},
  doi =		{10.4230/LIPIcs.OPODIS.2020.9},
  annote =	{Keywords: Framework, Asynchronous, Consensus boosting, State Machine Replication}
}
Document
Locally Decodable/Correctable Codes for Insertions and Deletions

Authors: Alexander R. Block, Jeremiah Blocki, Elena Grigorescu, Shubhang Kulkarni, and Minshen Zhu

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms. In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions. Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs. Specifically, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length. Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.

Cite as

Alexander R. Block, Jeremiah Blocki, Elena Grigorescu, Shubhang Kulkarni, and Minshen Zhu. Locally Decodable/Correctable Codes for Insertions and Deletions. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{block_et_al:LIPIcs.FSTTCS.2020.16,
  author =	{Block, Alexander R. and Blocki, Jeremiah and Grigorescu, Elena and Kulkarni, Shubhang and Zhu, Minshen},
  title =	{{Locally Decodable/Correctable Codes for Insertions and Deletions}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.16},
  URN =		{urn:nbn:de:0030-drops-132577},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.16},
  annote =	{Keywords: Locally decodable/correctable codes, insert-delete channel}
}
Document
Resource-Bounded Kolmogorov Complexity Provides an Obstacle to Soficness of Multidimensional Shifts

Authors: Julien Destombes and Andrei Romashchenko

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
We suggest necessary conditions for the soficness of multidimensional shifts, formulated in terms of resource-bounded Kolmogorov complexity. Using this technique, we provide examples of effective and non-sofic shifts on Z² with very low block complexity: the number of globally admissible patterns of size n × n grows only as a polynomial in n.

Cite as

Julien Destombes and Andrei Romashchenko. Resource-Bounded Kolmogorov Complexity Provides an Obstacle to Soficness of Multidimensional Shifts. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 23:1-23:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{destombes_et_al:LIPIcs.STACS.2019.23,
  author =	{Destombes, Julien and Romashchenko, Andrei},
  title =	{{Resource-Bounded Kolmogorov Complexity Provides an Obstacle to Soficness of Multidimensional Shifts}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{23:1--23:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.23},
  URN =		{urn:nbn:de:0030-drops-102624},
  doi =		{10.4230/LIPIcs.STACS.2019.23},
  annote =	{Keywords: Sofic shifts, Block complexity, Kolmogorov complexity}
}
Document
Continuous Non-Intrusive Hybrid WCET Estimation Using Waypoint Graphs

Authors: Boris Dreyer, Christian Hochberger, Alexander Lange, Simon Wegener, and Alexander Weiss

Published in: OASIcs, Volume 55, 16th International Workshop on Worst-Case Execution Time Analysis (WCET 2016)


Abstract
Traditionally, the Worst-Case Execution Time (WCET) of embedded software has been estimated using analytical approaches. This is effective if good models of the processor/System-on-Chip (SoC) architecture exist. Unfortunately, modern high-performance SoCs often contain unpredictable and/or undocumented components that influence the timing behaviour. Thus, analytical results for such processors are unrealistically pessimistic. One possible alternative approach seems to be hybrid WCET analysis, where measurement data together with an analytical approach is used to estimate worst-case behaviour. Previously, we demonstrated how continuous evaluation of basic block trace data can be used to produce detailed statistics of basic blocks in embedded software. In the meantime, it has become clear that the trace data provided by modern SoCs delivers a different type of information. In this contribution, we show that even under realistic conditions, a meaningful analysis can be conducted with the trace data.
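
The hybrid principle, taking per-block timings from traces and combining the observed maxima analytically along program paths, can be sketched in a few lines (a toy illustration of the principle; it does not reflect the waypoint-graph machinery or the authors' tooling):

from collections import defaultdict

def per_block_maxima(trace):
    """trace: iterable of (block_id, observed_cycles) pairs taken from traced runs."""
    worst = defaultdict(int)
    for block, cycles in trace:
        worst[block] = max(worst[block], cycles)
    return worst

def path_wcet_estimate(path, worst):
    """Combine per-block worst-case observations along one structural path."""
    return sum(worst[block] for block in path)

trace = [("entry", 12), ("loop", 30), ("loop", 45), ("exit", 7)]
worst = per_block_maxima(trace)
print(path_wcet_estimate(["entry", "loop", "loop", "exit"], worst))  # 12 + 45 + 45 + 7 = 109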

Cite as

Boris Dreyer, Christian Hochberger, Alexander Lange, Simon Wegener, and Alexander Weiss. Continuous Non-Intrusive Hybrid WCET Estimation Using Waypoint Graphs. In 16th International Workshop on Worst-Case Execution Time Analysis (WCET 2016). Open Access Series in Informatics (OASIcs), Volume 55, pp. 4:1-4:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{dreyer_et_al:OASIcs.WCET.2016.4,
  author =	{Dreyer, Boris and Hochberger, Christian and Lange, Alexander and Wegener, Simon and Weiss, Alexander},
  title =	{{Continuous Non-Intrusive Hybrid WCET Estimation Using Waypoint Graphs}},
  booktitle =	{16th International Workshop on Worst-Case Execution Time Analysis (WCET 2016)},
  pages =	{4:1--4:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-025-5},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{55},
  editor =	{Schoeberl, Martin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2016.4},
  URN =		{urn:nbn:de:0030-drops-68977},
  doi =		{10.4230/OASIcs.WCET.2016.4},
  annote =	{Keywords: Hybrid Worst-Case Execution Time (WCET) Estimation for Multicore Processors, Real-time Systems}
}