17 Search Results for "Wang, Chu"


Document
SoK: Attacks on DAOs

Authors: Rainer Feichtinger, Robin Fritsch, Lioba Heimbach, Yann Vonlanthen, and Roger Wattenhofer

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Autonomous Organizations (DAOs) are blockchain-based organizations that facilitate decentralized governance. Today, DAOs not only hold billions of dollars in their treasuries but also govern many of the most popular Decentralized Finance (DeFi) protocols. This paper systematically analyses security threats to DAOs, focusing on the types of attacks they face. We study attacks on DAOs that took place in the past, attacks that have been theorized to be possible, and potential attacks that were uncovered and prevented in audits. For each of these (potential) attacks, we describe the attack vectors utilized and categorize them into four categories. This reveals that while many attacks on DAOs take advantage of the less tangible and more complex human nature involved in governance, audits tend to focus on code and protocol vulnerabilities. The paper therefore also examines empirical data on DAO vulnerabilities, outlines risk factors contributing to these attacks, and suggests mitigation strategies to safeguard against such vulnerabilities.
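
To make the governance attack surface more concrete, here is a minimal, hypothetical Python sketch (not taken from the paper) of purely token-weighted voting: once voting power is simply token balance, an actor who temporarily amasses enough tokens can pass a proposal on their own. All names, balances, and thresholds below are illustrative assumptions.

# Minimal, hypothetical model of token-weighted DAO voting (illustrative only;
# names and thresholds are assumptions, not taken from any specific protocol).
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: float = 0.0
    votes_against: float = 0.0

@dataclass
class DAO:
    total_supply: float
    quorum_fraction: float = 0.2                 # assumed quorum: 20% of supply must vote
    balances: dict = field(default_factory=dict)

    def vote(self, proposal, voter, support):
        weight = self.balances.get(voter, 0.0)   # voting power == current token balance
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passes(self, proposal):
        turnout = proposal.votes_for + proposal.votes_against
        return (turnout >= self.quorum_fraction * self.total_supply
                and proposal.votes_for > proposal.votes_against)

dao = DAO(total_supply=1_000_000,
          balances={"honest_holders": 150_000, "attacker": 0})
drain = Proposal("send the treasury to the attacker")

# The attacker temporarily acquires a large token balance (e.g. borrowed on the
# market), votes, and the purely token-weighted rules accept the proposal.
dao.balances["attacker"] = 400_000
dao.vote(drain, "attacker", support=True)
dao.vote(drain, "honest_holders", support=False)
print("proposal passes:", dao.passes(drain))     # True: 400k for vs 150k against, quorum met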

Cite as

Rainer Feichtinger, Robin Fritsch, Lioba Heimbach, Yann Vonlanthen, and Roger Wattenhofer. SoK: Attacks on DAOs. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 28:1-28:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{feichtinger_et_al:LIPIcs.AFT.2024.28,
  author =	{Feichtinger, Rainer and Fritsch, Robin and Heimbach, Lioba and Vonlanthen, Yann and Wattenhofer, Roger},
  title =	{{SoK: Attacks on DAOs}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{28:1--28:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.28},
  URN =		{urn:nbn:de:0030-drops-209640},
  doi =		{10.4230/LIPIcs.AFT.2024.28},
  annote =	{Keywords: blockchain, DAO, governance, security, measurements, voting systems}
}
Document
A Sound Type System for Secure Currency Flow

Authors: Luca Aceto, Daniele Gorla, and Stian Lybech

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
In this paper we focus on TinySol, a minimal calculus for Solidity smart contracts, introduced by Bartoletti et al. We start by rephrasing its syntax (to emphasise its object-oriented flavour) and give a new big-step operational semantics. We then use it to define two security properties, namely call integrity and noninterference. These two properties have some similarities in their definition, in that they both require that some part of a program is not influenced by the other part. However, we show that the two properties are actually incomparable. Nevertheless, we provide a type system for noninterference and show that well-typed programs satisfy call integrity as well; hence, programs that are accepted by our type system satisfy both properties. We finally discuss the practical usability of the type system and its limitations by means of some simple examples.
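
As rough intuition for the flavour of property at stake (the paper's formal definitions over TinySol are not reproduced here), the following toy Python model shows a trusted wallet whose final state depends on which untrusted callee it interacts with, i.e. a violation of the informal idea behind call integrity. The class and method names are illustrative assumptions.

# Toy illustration (not TinySol): the final state of a "trusted" wallet depends
# on which untrusted callee it interacts with, so its behaviour is influenced
# by code it does not control.
class Wallet:
    def __init__(self, balance):
        self.balance = balance

    def pay_out(self, callee, amount):
        # Naive pattern: transfer first, then notify the callee, whose code
        # we do not control and which may call back into the wallet.
        self.balance -= amount
        callee.on_receive(self, amount)

class HonestCallee:
    def on_receive(self, wallet, amount):
        pass                                  # does nothing

class ReentrantCallee:
    def __init__(self):
        self.done = False

    def on_receive(self, wallet, amount):
        if not self.done:                     # call back into the wallet once more
            self.done = True
            wallet.pay_out(self, amount)

for callee in (HonestCallee(), ReentrantCallee()):
    w = Wallet(balance=100)
    w.pay_out(callee, 10)
    # The same final balance in both runs would be expected of an uninfluenced
    # wallet; here it is 90 with the honest callee and 80 with the reentrant one.
    print(type(callee).__name__, w.balance)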

Cite as

Luca Aceto, Daniele Gorla, and Stian Lybech. A Sound Type System for Secure Currency Flow. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 1:1-1:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{aceto_et_al:LIPIcs.ECOOP.2024.1,
  author =	{Aceto, Luca and Gorla, Daniele and Lybech, Stian},
  title =	{{A Sound Type System for Secure Currency Flow}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{1:1--1:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.1},
  URN =		{urn:nbn:de:0030-drops-208508},
  doi =		{10.4230/LIPIcs.ECOOP.2024.1},
  annote =	{Keywords: smart contracts, call integrity, noninterference, type system}
}
Document
A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers

Authors: Maarten Flippo, Konstantin Sidorov, Imko Marijnissen, Jeff Smits, and Emir Demirović

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Proof logging is used to increase trust in the optimality and unsatisfiability claims of solvers. However, to date, no constraint programming solver can practically produce proofs without significantly impacting performance, which hinders mainstream adoption. We address this issue by introducing a novel proof generation framework, together with a CP proof format and proof checker. Our approach is to divide the proof generation into three steps. At runtime, we require the CP solver to only produce a proof sketch, which we call a scaffold. After the solving is done, our proof processor trims and expands the scaffold into a full CP proof, which is subsequently verified. Our framework is agnostic to the solver and the verification approach. Through MiniZinc benchmarks, we demonstrate that with our framework the overhead of logging during solving is often less than 10%, significantly lower than with other approaches, and that our proof processing step can reduce the overall size of the proof, and by extension the proof checking time, by orders of magnitude. Our results demonstrate that proof logging has the potential to become an integral part of the CP community.
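
The three-stage pipeline (a cheap scaffold logged at runtime, offline expansion into full justifications, independent checking) can be illustrated on a toy constraint problem; the Python sketch below is only a schematic analogy and does not use the paper's CP proof format or checker.

# Toy three-stage proof-logging sketch (illustrative only). Stage 1: the
# "solver" logs a cheap scaffold, here the prefixes it pruned. Stage 2: a
# processor expands every scaffold entry into a fully verified infeasibility
# claim. Stage 3: a checker confirms the claims cover the whole search space.
from itertools import product

DOMAIN = (0, 1, 2)
NUM_VARS = 3

def violated(assignment):
    """True if the (partial) assignment already breaks a constraint.
    Constraints: all variables pairwise different, and the full sum equals 2."""
    if len(set(assignment)) != len(assignment):
        return True
    if len(assignment) == NUM_VARS and sum(assignment) != 2:
        return True
    return False

def solve_and_log_scaffold():
    """Depth-first 'solver' that only records pruned prefixes (the scaffold)."""
    scaffold = []
    def dfs(prefix):
        if violated(prefix):
            scaffold.append(tuple(prefix))
            return
        if len(prefix) == NUM_VARS:
            raise RuntimeError("satisfiable: no UNSAT proof to produce")
        for value in DOMAIN:
            dfs(prefix + [value])
    dfs([])
    return scaffold

def expand_scaffold(scaffold):
    """Turn each scaffold entry into a checked claim: every completion fails."""
    proof = []
    for prefix in scaffold:
        tails = product(DOMAIN, repeat=NUM_VARS - len(prefix))
        assert all(violated(prefix + tail) for tail in tails), prefix
        proof.append(prefix)
    return proof

def check_proof(proof):
    """Verify the claims cover every full assignment, hence UNSAT."""
    for full in product(DOMAIN, repeat=NUM_VARS):
        assert any(full[:len(p)] == p for p in proof), full
    return True

scaffold = solve_and_log_scaffold()
print("scaffold size:", len(scaffold))
print("certified UNSAT:", check_proof(expand_scaffold(scaffold)))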

Cite as

Maarten Flippo, Konstantin Sidorov, Imko Marijnissen, Jeff Smits, and Emir Demirović. A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{flippo_et_al:LIPIcs.CP.2024.11,
  author =	{Flippo, Maarten and Sidorov, Konstantin and Marijnissen, Imko and Smits, Jeff and Demirovi\'{c}, Emir},
  title =	{{A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.11},
  URN =		{urn:nbn:de:0030-drops-206969},
  doi =		{10.4230/LIPIcs.CP.2024.11},
  annote =	{Keywords: proof logging, formal verification, constraint programming}
}
Document
Structure-Guided Local Improvement for Maximum Satisfiability

Authors: André Schidler and Stefan Szeider

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
The enhanced performance of today’s MaxSAT solvers has elevated their appeal for many large-scale applications, notably in software analysis and computer-aided design. Our research delves into refining anytime MaxSAT solving by repeatedly identifying smaller subinstances, chosen based on the graphical structure of the instance, and solving them with an exact solver. We investigate various strategies to pinpoint these subinstances. This structure-guided selection of subinstances provides an exact solver with a high potential for improving the current solution. Our exhaustive experimental analyses contrast our methodology, as instantiated in our tool MaxSLIM, with previous studies and benchmark it against leading-edge MaxSAT solvers.
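
The overall loop is a large-neighbourhood, local-improvement style iteration: pick a small subinstance guided by the instance structure, solve it exactly, and keep any improvement. The Python sketch below illustrates that loop on a toy unweighted MaxSAT instance; the clause-based block selection and the brute-force "exact solver" are simple stand-ins for MaxSLIM's actual strategies, not a reimplementation of them.

# Generic local-improvement loop for (unweighted) MaxSAT, illustrative only.
from itertools import product
import random

# Clauses as tuples of literals: positive v means x_v, negative means not x_v.
CLAUSES = [(1, 2), (-1, 3), (-2, -3), (2, 4), (-4, 1), (3, -4), (-1, -2)]
NUM_VARS = 4

def satisfied(clause, assignment):
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def cost(assignment):
    return sum(not satisfied(c, assignment) for c in CLAUSES)

def pick_subinstance(k, rng):
    """Structure-guided stand-in: start from the variables of a random clause,
    then grow with co-occurring variables until k variables are selected."""
    block = {abs(lit) for lit in rng.choice(CLAUSES)}
    while len(block) < k:
        touching = [abs(lit) for c in CLAUSES
                    if block & {abs(l) for l in c} for lit in c]
        block.add(rng.choice(touching))
    return sorted(block)

def solve_block_exactly(assignment, block):
    """Brute-force the selected variables while keeping the rest fixed."""
    best = dict(assignment)
    for values in product((False, True), repeat=len(block)):
        trial = dict(assignment)
        trial.update(zip(block, values))
        if cost(trial) < cost(best):
            best = trial
    return best

rng = random.Random(0)
current = {v: rng.random() < 0.5 for v in range(1, NUM_VARS + 1)}
for _ in range(20):                        # outer anytime loop
    current = solve_block_exactly(current, pick_subinstance(3, rng))
print("falsified clauses:", cost(current), "assignment:", current)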

Cite as

André Schidler and Stefan Szeider. Structure-Guided Local Improvement for Maximum Satisfiability. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 26:1-26:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schidler_et_al:LIPIcs.CP.2024.26,
  author =	{Schidler, Andr\'{e} and Szeider, Stefan},
  title =	{{Structure-Guided Local Improvement for Maximum Satisfiability}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{26:1--26:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.26},
  URN =		{urn:nbn:de:0030-drops-207112},
  doi =		{10.4230/LIPIcs.CP.2024.26},
  annote =	{Keywords: maximum satisfiability, large neighborhood search (LNS), SAT-based local improvement (SLIM), incomplete MaxSAT, graphical structure, metaheuristic}
}
Document
Learning Precedences for Scheduling Problems with Graph Neural Networks

Authors: Hélène Verhaeghe, Quentin Cappart, Gilles Pesant, and Claude-Guy Quimper

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
The resource constrained project scheduling problem (RCPSP) consists of scheduling a finite set of resource-consuming tasks within a temporal horizon subject to resource capacities and precedence relations between pairs of tasks. It is NP-hard, and many techniques have been introduced to improve the efficiency with which CP solvers solve it. The problem is naturally represented as a directed graph, commonly referred to as the precedence graph, by linking pairs of tasks subject to a precedence. In this paper, we propose to leverage the ability of graph neural networks to extract knowledge from precedence graphs. This is carried out by learning new precedences that can be used either to add new constraints or to design a dedicated variable-selection heuristic. Experiments carried out on RCPSP instances from PSPLIB show the potential of learning to predict precedences and how they can help speed up the search for solutions by a CP solver.
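
To illustrate how a predicted precedence can be exploited as an extra constraint (the graph-neural-network predictor itself is omitted), the Python sketch below computes earliest start times over a made-up precedence graph and shows how adding one more precedence tightens them. The instance is an illustrative assumption, not a PSPLIB benchmark.

# Minimal sketch: using an extra (e.g. predicted) precedence as a constraint.
from collections import defaultdict, deque

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
precedences = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")}

def earliest_starts(durations, precedences):
    """Longest-path earliest start times over the precedence DAG (no resources)."""
    succ, indeg = defaultdict(list), {t: 0 for t in durations}
    for u, v in precedences:
        succ[u].append(v)
        indeg[v] += 1
    start = {t: 0 for t in durations}
    queue = deque(t for t in durations if indeg[t] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            start[v] = max(start[v], start[u] + durations[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return start

print(earliest_starts(durations, precedences))

# A learned model might predict that B should precede C in good schedules;
# adding that precedence tightens the earliest starts of C and D.
print(earliest_starts(durations, precedences | {("B", "C")}))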

Cite as

Hélène Verhaeghe, Quentin Cappart, Gilles Pesant, and Claude-Guy Quimper. Learning Precedences for Scheduling Problems with Graph Neural Networks. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 30:1-30:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{verhaeghe_et_al:LIPIcs.CP.2024.30,
  author =	{Verhaeghe, H\'{e}l\`{e}ne and Cappart, Quentin and Pesant, Gilles and Quimper, Claude-Guy},
  title =	{{Learning Precedences for Scheduling Problems with Graph Neural Networks}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{30:1--30:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.30},
  URN =		{urn:nbn:de:0030-drops-207150},
  doi =		{10.4230/LIPIcs.CP.2024.30},
  annote =	{Keywords: Scheduling, Precedence graph, Graph neural network}
}
Document
Inverting Step-Reduced SHA-1 and MD5 by Parameterized SAT Solvers

Authors: Oleg Zaikin

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
MD5 and SHA-1 are fundamental cryptographic hash functions proposed in the 1990s. Given a message of arbitrary finite size, MD5 produces a 128-bit hash in 64 steps, while SHA-1 produces a 160-bit hash in 80 steps. It is computationally infeasible to invert MD5 and SHA-1, i.e. to find a message given a hash. In 2012, 28-step MD5 and 23-step SHA-1 were inverted by CDCL solvers, yet no progress has been made since then. The present paper proposes to construct 31 intermediate inverse problems for any pair of MD5 or SHA-1 steps (i, i+1), such that the first problem is very close to inverting i steps, while the 31st one is almost inverting i+1 steps. We constructed SAT encodings of intermediate problems for MD5 and SHA-1, and tuned a CDCL solver on the simplest of them. The tuned solver was then used to design a parallel Cube-and-Conquer solver which for the first time inverted 29-step MD5 and 24-step SHA-1.
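
The notion of a step-reduced preimage attack can be illustrated at toy scale: the Python sketch below defines a made-up 16-bit, variable-step compression function and recovers preimages by brute force, standing in for the paper's SAT-based search. Fewer steps leave more message bits unused, so step-reduced versions have many more preimages and are easier to invert. The function itself is an arbitrary illustrative construction, unrelated to MD5 or SHA-1.

# Toy "step-reduced" preimage search (illustrative only, not a SAT encoding).
MASK = 0xFFFF

def toy_hash(message, steps):
    """A made-up 16-bit-state 'hash'; bit i of the message enters at step i."""
    a, b = 0x1234, 0x5678
    for i in range(steps):
        a = (a + ((b << 3 | b >> 13) & MASK) + ((message >> i) & 1)) & MASK
        b = b ^ a ^ (0x9E37 * (i + 1) & MASK)
    return (a << 16) | b

def invert(target, steps, msg_bits=16):
    """Exhaustive preimage search, standing in for the SAT-based attack."""
    for m in range(1 << msg_bits):
        if toy_hash(m, steps) == target:
            return m
    return None

def count_preimages(target, steps, msg_bits=16):
    return sum(toy_hash(m, steps) == target for m in range(1 << msg_bits))

for steps in (4, 8, 12):
    target = toy_hash(0xBEEF, steps)
    found = invert(target, steps)
    print(f"{steps:2d} steps: preimage {found:#06x}, "
          f"{count_preimages(target, steps)} preimages in a 16-bit message space")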

Cite as

Oleg Zaikin. Inverting Step-Reduced SHA-1 and MD5 by Parameterized SAT Solvers. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 31:1-31:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{zaikin:LIPIcs.CP.2024.31,
  author =	{Zaikin, Oleg},
  title =	{{Inverting Step-Reduced SHA-1 and MD5 by Parameterized SAT Solvers}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{31:1--31:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.31},
  URN =		{urn:nbn:de:0030-drops-207165},
  doi =		{10.4230/LIPIcs.CP.2024.31},
  annote =	{Keywords: cryptographic hash function, MD5, SHA-1, preimage attack, SAT, Cube-and-Conquer}
}
Document
AlfaPang: Alignment Free Algorithm for Pangenome Graph Construction

Authors: Adam Cicherski, Anna Lisiecka, and Norbert Dojer

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
The success of pangenome-based approaches to genomics analysis depends largely on the existence of efficient methods for constructing pangenome graphs that are applicable to large genome collections. In the current paper we present AlfaPang, a new pangenome graph building algorithm. AlfaPang is based on a novel alignment-free approach that allows pangenome graphs to be constructed using significantly fewer computational resources than state-of-the-art tools. The code of AlfaPang is freely available at https://github.com/AdamCicherski/AlfaPang.
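
For readers unfamiliar with alignment-free graph construction, the Python sketch below builds a small k-mer (de Bruijn-style) graph shared by two toy sequences and records which genomes traverse each node. This is only a generic illustration of the alignment-free idea, not AlfaPang's algorithm or data model.

# Generic alignment-free k-mer graph over several sequences (illustrative only).
from collections import defaultdict

def kmer_graph(sequences, k):
    nodes = set()
    edges = defaultdict(set)            # (k-1)-mer -> successor (k-1)-mers
    colors = defaultdict(set)           # which input genomes touch each node
    for name, seq in sequences.items():
        for i in range(len(seq) - k + 1):
            u, v = seq[i:i + k - 1], seq[i + 1:i + k]
            nodes.update((u, v))
            edges[u].add(v)
            colors[u].add(name)
            colors[v].add(name)
    return nodes, edges, colors

genomes = {"g1": "ACGTACGTGA", "g2": "ACGTCCGTGA"}
nodes, edges, colors = kmer_graph(genomes, k=4)
for u in sorted(edges):
    for v in sorted(edges[u]):
        print(f"{u} -> {v}  (touched by {sorted(colors[u] & colors[v])})")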

Cite as

Adam Cicherski, Anna Lisiecka, and Norbert Dojer. AlfaPang: Alignment Free Algorithm for Pangenome Graph Construction. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cicherski_et_al:LIPIcs.WABI.2024.23,
  author =	{Cicherski, Adam and Lisiecka, Anna and Dojer, Norbert},
  title =	{{AlfaPang: Alignment Free Algorithm for Pangenome Graph Construction}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{23:1--23:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.23},
  URN =		{urn:nbn:de:0030-drops-206673},
  doi =		{10.4230/LIPIcs.WABI.2024.23},
  annote =	{Keywords: pangenome, variation graph, genome alignment, population genomics}
}
Document
Clausal Congruence Closure

Authors: Armin Biere, Katalin Fazekas, Mathias Fleury, and Nils Froleyks

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Many practical applications of satisfiability solving employ multiple steps to encode an original problem formulation into conjunctive normal form. Often circuits are used as intermediate representation before encoding those circuits into clausal form. These circuits, however, might contain redundant isomorphic sub-circuits. If the circuits are blindly translated into clausal form, this redundancy is retained and increases solving time unless specific preprocessing algorithms are used. Furthermore, such redundant sub-formula structure might only emerge during solving and needs to be addressed by inprocessing. This paper presents a new approach which extracts gate information from the formula and applies congruence closure to match and eliminate redundant gates. Besides new algorithms for gate extraction, we also describe previous unpublished attempts to tackle this problem. Experiments focus on the important problem of combinational equivalence checking for hardware designs and show that our new approach yields a substantial gain in CNF solver performance.
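
A classic circuit-level analogue of the idea is structural hashing: merge gates whose canonicalised inputs coincide. The Python sketch below shows that on a four-gate netlist; the paper instead extracts gates from the CNF itself and runs congruence closure at the clause level, which this sketch does not attempt.

# Classic structural hashing on a tiny AND-gate netlist (illustrative only).
def structural_hash(gates):
    """gates: dict gate_name -> ('AND', input_a, input_b).
    Returns a mapping from each gate to a canonical representative, merging
    gates whose (already canonicalised) input pairs coincide."""
    canon = {}                      # gate -> representative
    seen = {}                       # normalised input pair -> representative
    for g in gates:                 # assumes gates are listed in topological order
        op, a, b = gates[g]
        a, b = canon.get(a, a), canon.get(b, b)
        key = (op,) + tuple(sorted((a, b)))
        canon[g] = seen.setdefault(key, g)
    return canon

# Two structurally identical sub-circuits over the same primary inputs.
netlist = {
    "g1": ("AND", "x", "y"),
    "g2": ("AND", "y", "x"),        # same gate as g1 up to input order
    "g3": ("AND", "g1", "z"),
    "g4": ("AND", "g2", "z"),       # becomes identical to g3 once g2 is merged
}
print(structural_hash(netlist))     # g2 maps to g1, g4 maps to g3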

Cite as

Armin Biere, Katalin Fazekas, Mathias Fleury, and Nils Froleyks. Clausal Congruence Closure. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 6:1-6:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{biere_et_al:LIPIcs.SAT.2024.6,
  author =	{Biere, Armin and Fazekas, Katalin and Fleury, Mathias and Froleyks, Nils},
  title =	{{Clausal Congruence Closure}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{6:1--6:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.6},
  URN =		{urn:nbn:de:0030-drops-205287},
  doi =		{10.4230/LIPIcs.SAT.2024.6},
  annote =	{Keywords: Satisfiability Solving, Congruence Closure, Structural Hashing, SAT Sweeping, Conjunctive Normal Form, Combinational Equivalence Checking, Hardware Equivalence Checking}
}
Document
Enhancing MaxSAT Local Search via a Unified Soft Clause Weighting Scheme

Authors: Yi Chu, Chu-Min Li, Furong Ye, and Shaowei Cai

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Local search has been widely applied to solve the well-known (weighted) partial MaxSAT problem, significantly influencing many real-world applications. The main difficulty to overcome when designing a local search algorithm is that it can easily fall into local optima. Clause weighting is a beneficial technique that dynamically adjusts the landscape of the search space to help the algorithm escape from local optima. Existing works tend to increase the weights of falsified clauses, and such strategies may result in an unpredictable landscape of the search space during the optimization process. Therefore, in this paper, we propose a Unified Soft Clause Weighting Scheme called Unified-SW, which increases the weights of all soft clauses in feasible local optima, whether they are satisfied or not, while preserving the hierarchy among them. We implemented Unified-SW in a new local search solver called USW-LS. Experimental results demonstrate that USW-LS outperforms the state-of-the-art local search solvers across benchmarks from the anytime tracks of recent MaxSAT Evaluations. More promisingly, a hybrid solver combining USW-LS and TT-Open-WBO-Inc won all four categories in the anytime track of MaxSAT Evaluation 2023.
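
The Python toy below runs a dynamic-weighting local search on a tiny weighted partial MaxSAT instance and, at feasible local optima, bumps the weights of all soft clauses at once, merely to convey the flavour of such a scheme; it is not the Unified-SW rule or USW-LS, and the instance and update constant are illustrative assumptions.

# Toy weighted partial MaxSAT local search with dynamic soft-clause weights.
import random

HARD = [(1, 2), (-1, -2)]                     # exactly one of x1, x2
SOFT = [((1,), 3), ((2,), 2), ((-1, 3), 1)]   # (clause, original weight)
NUM_VARS = 3

def sat(clause, a):
    return any((lit > 0) == a[abs(lit)] for lit in clause)

def hard_ok(a):
    return all(sat(c, a) for c in HARD)

def dyn_soft_cost(a, weights):
    return sum(w for (c, _), w in zip(SOFT, weights) if not sat(c, a))

def penalty(a, weights):
    return (sum(not sat(c, a) for c in HARD), dyn_soft_cost(a, weights))

rng = random.Random(1)
assign = {v: rng.random() < 0.5 for v in range(1, NUM_VARS + 1)}
weights = [w for _, w in SOFT]                # dynamic weights start at the originals
best = None

for _ in range(200):
    if hard_ok(assign):
        true_cost = sum(w for c, w in SOFT if not sat(c, assign))
        best = true_cost if best is None else min(best, true_cost)
    moves = []
    for v in range(1, NUM_VARS + 1):
        trial = dict(assign)
        trial[v] = not trial[v]
        moves.append((penalty(trial, weights), v, trial))
    best_pen, _, trial = min(moves)
    if best_pen < penalty(assign, weights):
        assign = trial                        # improving flip under dynamic weights
    elif hard_ok(assign):
        weights = [w + 1 for w in weights]    # feasible local optimum: bump all soft weights
    else:
        assign[rng.randrange(1, NUM_VARS + 1)] = rng.random() < 0.5   # random kick

print("best (original-weight) soft cost found:", best)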

Cite as

Yi Chu, Chu-Min Li, Furong Ye, and Shaowei Cai. Enhancing MaxSAT Local Search via a Unified Soft Clause Weighting Scheme. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chu_et_al:LIPIcs.SAT.2024.8,
  author =	{Chu, Yi and Li, Chu-Min and Ye, Furong and Cai, Shaowei},
  title =	{{Enhancing MaxSAT Local Search via a Unified Soft Clause Weighting Scheme}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.8},
  URN =		{urn:nbn:de:0030-drops-205301},
  doi =		{10.4230/LIPIcs.SAT.2024.8},
  annote =	{Keywords: Weighted Partial MaxSAT, Local Search Method, Weighting Scheme}
}
Document
Polynomial Pass Semi-Streaming Lower Bounds for K-Cores and Degeneracy

Authors: Sepehr Assadi, Prantar Ghosh, Bruno Loff, Parth Mittal, and Sagnik Mukhopadhyay

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
The following question arises naturally in the study of graph streaming algorithms: Is there any graph problem which is "not too hard", in that it can be solved efficiently with total communication (nearly) linear in the number n of vertices, and for which, nonetheless, any streaming algorithm with Õ(n) space (i.e., a semi-streaming algorithm) needs a polynomial n^Ω(1) number of passes? Assadi, Chen, and Khanna [STOC 2019] were the first to prove that this is indeed the case. However, the lower bounds that they obtained are for rather non-standard graph problems.

Our first main contribution is to present the first polynomial-pass lower bounds for natural "not too hard" graph problems studied previously in the streaming model: k-cores and degeneracy. We devise a novel communication protocol for both problems with near-linear communication, thus showing that k-cores and degeneracy are natural examples of "not too hard" problems. Indeed, previous work has developed single-pass semi-streaming algorithms for approximating these problems. In contrast, we prove that any semi-streaming algorithm for exactly solving these problems requires (almost) Ω(n^{1/3}) passes. The lower bound follows by a reduction from a generalization of the hidden pointer chasing (HPC) problem of Assadi, Chen, and Khanna, which is also the basis of their earlier semi-streaming lower bounds.

Our second main contribution is improved round-communication lower bounds for the underlying communication problems at the basis of these reductions:
- We improve the previous lower bound of Assadi, Chen, and Khanna for HPC to achieve optimal bounds for this problem.
- We further observe that all current reductions from HPC can also work with a generalized version of this problem that we call MultiHPC, and prove an even stronger and optimal lower bound for this generalization.
These two results collectively allow us to improve the resulting pass lower bounds for semi-streaming algorithms by a polynomial factor, namely, from n^{1/5} to n^{1/3} passes.
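
For reference, the quantities in question are easy to compute offline: the classic peeling algorithm below (plain Python, not a streaming algorithm) returns the degeneracy and the core number of every vertex; the paper's results concern how many passes any Õ(n)-space streaming algorithm needs to compute them exactly.

# Offline k-core / degeneracy peeling (illustrative reference implementation).
from collections import defaultdict
import heapq

def degeneracy_and_cores(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in adj}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, core, degeneracy = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                       # stale heap entry
        degeneracy = max(degeneracy, d)
        core[v] = degeneracy               # core number = peeling threshold reached at v
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return degeneracy, core

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]   # a triangle with a tail
print(degeneracy_and_cores(edges))                 # degeneracy 2; the tail has core 1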

Cite as

Sepehr Assadi, Prantar Ghosh, Bruno Loff, Parth Mittal, and Sagnik Mukhopadhyay. Polynomial Pass Semi-Streaming Lower Bounds for K-Cores and Degeneracy. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.CCC.2024.7,
  author =	{Assadi, Sepehr and Ghosh, Prantar and Loff, Bruno and Mittal, Parth and Mukhopadhyay, Sagnik},
  title =	{{Polynomial Pass Semi-Streaming Lower Bounds for K-Cores and Degeneracy}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.7},
  URN =		{urn:nbn:de:0030-drops-204035},
  doi =		{10.4230/LIPIcs.CCC.2024.7},
  annote =	{Keywords: Graph streaming, Lower bounds, Communication complexity, k-Cores and degeneracy}
}
Document
Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks

Authors: Kenneth Langedal, Demian Hespe, and Peter Sanders

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Identifying a maximum independent set is a fundamental NP-hard problem. This problem has several real-world applications and requires finding the largest possible set of vertices not adjacent to each other in an undirected graph. Over the past few years, branch-and-bound and branch-and-reduce algorithms have emerged as some of the most effective methods for solving the problem exactly. Specifically, the branch-and-reduce approach, which combines branch-and-bound principles with reduction rules, has proven particularly successful in tackling previously unmanageable real-world instances. This progress was largely made possible by the development of more effective reduction rules. Nevertheless, other key components that can impact the efficiency of these algorithms have not received the same level of interest. Among these is the branching strategy, which determines which vertex to branch on next. Until recently, the most widely used strategy was to choose the vertex of the highest degree. In this work, we present a graph neural network approach for selecting the next branching vertex. The intricate nature of current branch-and-bound solvers makes supervised and reinforcement learning difficult. Therefore, we use a population-based genetic algorithm to evolve the model’s parameters instead. Our proposed approach results in a speedup on 73% of the benchmark instances with a median speedup of 24%.
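
The role of the branching rule is easiest to see in a stripped-down solver. The Python sketch below is a minimal branch-and-bound for maximum independent set with a pluggable branching function, here the classic highest-degree rule; the paper's learned GNN scorer would simply replace that function, and nothing else in this sketch comes from the paper.

# Minimal branch-and-bound for maximum independent set with a pluggable rule.
def max_independent_set(adj, branch_rule):
    best = set()

    def bound(live):                      # trivial upper bound: take everything left
        return len(live)

    def solve(live, chosen):
        nonlocal best
        if not live:
            if len(chosen) > len(best):
                best = set(chosen)
            return
        if len(chosen) + bound(live) <= len(best):
            return                        # prune: cannot beat the incumbent
        v = branch_rule(live, adj)
        # branch 1: include v, drop its neighbours
        solve(live - {v} - adj[v], chosen | {v})
        # branch 2: exclude v
        solve(live - {v}, chosen)

    solve(set(adj), set())
    return best

def highest_degree(live, adj):
    return max(live, key=lambda v: len(adj[v] & live))

# 5-cycle plus a pendant vertex; a maximum independent set has size 3.
adj = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1, 6}, 6: {5}}
print(max_independent_set(adj, highest_degree))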

Cite as

Kenneth Langedal, Demian Hespe, and Peter Sanders. Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{langedal_et_al:LIPIcs.SEA.2024.20,
  author =	{Langedal, Kenneth and Hespe, Demian and Sanders, Peter},
  title =	{{Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.20},
  URN =		{urn:nbn:de:0030-drops-203853},
  doi =		{10.4230/LIPIcs.SEA.2024.20},
  annote =	{Keywords: Graphs, Independent Set, Vertex Cover, Graph Neural Networks, Branch-and-Reduce}
}
Document
Track A: Algorithms, Complexity and Games
Approximate Counting for Spin Systems in Sub-Quadratic Time

Authors: Konrad Anand, Weiming Feng, Graham Freifeld, Heng Guo, and Jiaheng Wang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We present two randomised approximate counting algorithms with Õ(n^{2-c}/ε²) running time for some constant c > 0 and accuracy ε: 1) for the hard-core model with fugacity λ on graphs with maximum degree Δ when λ = O(Δ^{-1.5-c₁}) where c₁ = c/(2-2c); 2) for spin systems with strong spatial mixing (SSM) on planar graphs with quadratic growth, such as ℤ². For the hard-core model, Weitz’s algorithm (STOC, 2006) achieves sub-quadratic running time when correlation decays faster than the neighbourhood growth, namely when λ = o(Δ^{-2}). Our first algorithm does not require this property and extends the range where sub-quadratic algorithms exist. Our second algorithm appears to be the first to achieve sub-quadratic running time up to the SSM threshold, albeit on a restricted family of graphs. It also extends to (not necessarily planar) graphs with polynomial growth, such as ℤ^d, but with a running time of the form Õ(n²ε^{-2}/2^{c(log n)^{1/d}}) where d is the exponent of the polynomial growth and c > 0 is some constant.
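
For background, the hard-core model itself is easy to sample from approximately with the classic Glauber dynamics; the Python sketch below does exactly that on a tiny grid, purely as an illustration of the model. It is unrelated to the sub-quadratic counting algorithms of the paper, and the step count used is arbitrary.

# Classic Glauber dynamics for the hard-core model (illustrative only).
import random

def glauber_hardcore(adj, fugacity, steps, rng):
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        # Heat-bath update: occupy v with probability fugacity / (1 + fugacity),
        # but only if no neighbour is currently occupied (hard-core constraint).
        if rng.random() < fugacity / (1 + fugacity) and not (adj[v] & occupied):
            occupied.add(v)
    return occupied

# 3x3 grid graph: a tiny patch of Z^2.
adj = {}
for x in range(3):
    for y in range(3):
        adj[(x, y)] = {(x + dx, y + dy)
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= x + dx < 3 and 0 <= y + dy < 3}

rng = random.Random(42)
sample = glauber_hardcore(adj, fugacity=0.5, steps=5000, rng=rng)
print("independent set sampled:", sorted(sample))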

Cite as

Konrad Anand, Weiming Feng, Graham Freifeld, Heng Guo, and Jiaheng Wang. Approximate Counting for Spin Systems in Sub-Quadratic Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{anand_et_al:LIPIcs.ICALP.2024.11,
  author =	{Anand, Konrad and Feng, Weiming and Freifeld, Graham and Guo, Heng and Wang, Jiaheng},
  title =	{{Approximate Counting for Spin Systems in Sub-Quadratic Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.11},
  URN =		{urn:nbn:de:0030-drops-201543},
  doi =		{10.4230/LIPIcs.ICALP.2024.11},
  annote =	{Keywords: Randomised algorithm, Approximate counting, Spin system, Sub-quadratic algorithm}
}
Document
Track A: Algorithms, Complexity and Games
On the Streaming Complexity of Expander Decomposition

Authors: Yu Chen, Michael Kapralov, Mikhail Makarov, and Davide Mazzali

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper we study the problem of finding (ε, ϕ)-expander decompositions of a graph in the streaming model, in particular for dynamic streams of edge insertions and deletions. The goal is to partition the vertex set so that every component induces a ϕ-expander, while the number of inter-cluster edges is only an ε fraction of the total volume. It was recently shown that there exists a simple algorithm to construct a (O(ϕ log n), ϕ)-expander decomposition of an n-vertex graph using Õ(n/ϕ²) bits of space [Filtser, Kapralov, Makarov, ITCS'23]. This result calls for understanding the extent to which a dependence in space on the sparsity parameter ϕ is inherent. We move towards answering this question on two fronts.

We prove that a (O(ϕ log n), ϕ)-expander decomposition can be found using Õ(n) space, for every ϕ. At the core of our result is the first streaming algorithm for computing boundary-linked expander decompositions, a recently introduced strengthening of the classical notion [Goranci et al., SODA'21]. The key advantage is that a classical sparsifier [Fung et al., STOC'11], with size independent of ϕ, preserves the cuts inside the clusters of a boundary-linked expander decomposition within a multiplicative error.

Notable algorithmic applications use sequences of expander decompositions; in particular, one often repeatedly computes a decomposition of the subgraph induced by the inter-cluster edges (e.g., the seminal work of Spielman and Teng on spectral sparsifiers [Spielman, Teng, SIAM Journal on Computing 40(4)], or the recent maximum flow breakthrough [Chen et al., FOCS'22], among others). We prove that any streaming algorithm that computes a sequence of (O(ϕ log n), ϕ)-expander decompositions requires Ω̃(n/ϕ) bits of space, even in insertion-only streams.
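
The definition being decomposed can be checked directly on toy inputs: the Python sketch below brute-forces conductance and verifies a candidate (ε, ϕ)-expander decomposition of a six-vertex graph. It is purely a definitional illustration, exponential-time, and has nothing to do with the paper's streaming algorithms or lower bounds.

# Brute-force check of an (eps, phi)-expander decomposition on a tiny graph
# (conductance is brute-forced over all cuts, so this only works on toy inputs).
from itertools import combinations

def volume(adj, S):
    return sum(len(adj[v]) for v in S)

def conductance(adj, S):
    """Minimum conductance over all cuts of the subgraph induced by S."""
    best = float("inf")
    S = set(S)
    for r in range(1, len(S)):
        for A in combinations(sorted(S), r):
            A = set(A)
            cut = sum(1 for u in A for v in adj[u] if v in S - A)
            vol = min(sum(len(adj[u] & S) for u in A),
                      sum(len(adj[u] & S) for u in S - A))
            if vol:
                best = min(best, cut / vol)
    return best

def check_decomposition(adj, clusters, eps, phi):
    cluster_of = {v: i for i, c in enumerate(clusters) for v in c}
    inter = sum(1 for u in adj for v in adj[u] if cluster_of[u] != cluster_of[v]) // 2
    expanders_ok = all(len(c) == 1 or conductance(adj, c) >= phi for c in clusters)
    return expanders_ok and inter <= eps * volume(adj, adj.keys())

# Two triangles joined by a single edge.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
clusters = [{1, 2, 3}, {4, 5, 6}]
print(check_decomposition(adj, clusters, eps=0.1, phi=0.5))   # True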

Cite as

Yu Chen, Michael Kapralov, Mikhail Makarov, and Davide Mazzali. On the Streaming Complexity of Expander Decomposition. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 46:1-46:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.46,
  author =	{Chen, Yu and Kapralov, Michael and Makarov, Mikhail and Mazzali, Davide},
  title =	{{On the Streaming Complexity of Expander Decomposition}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{46:1--46:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.46},
  URN =		{urn:nbn:de:0030-drops-201890},
  doi =		{10.4230/LIPIcs.ICALP.2024.46},
  annote =	{Keywords: Graph Sketching, Dynamic Streaming, Expander Decomposition}
}
Document
Track A: Algorithms, Complexity and Games
Better Sparsifiers for Directed Eulerian Graphs

Authors: Sushant Sachdeva, Anvith Thudi, and Yibin Zhao

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives for fast computation of fundamental problems on random walks, such as computing stationary distributions, hitting and commute times, and personalized PageRank vectors. Spectral sparsification is well understood for undirected graphs: for every graph G, (1+ε)-sparsifiers with O(nε^{-2}) edges exist [Batson-Spielman-Srivastava, STOC '09], which is optimal. In contrast, the best known constructions of Eulerian sparsifiers require Ω(nε^{-2}log⁴ n) edges and are based on short-cycle decompositions [Chu et al., FOCS '18]. In this paper, we give improved constructions of Eulerian sparsifiers, specifically:
1) We show that for every directed Eulerian graph G→, there exists an Eulerian sparsifier with O(nε^{-2} log² n log²log n + nε^{-4/3}log^{8/3} n) edges. This result is based on combining short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18, SICOMP] and [Parter-Yogev, ICALP '19] with recent progress on the matrix Spencer conjecture [Bansal-Meka-Jiang, STOC '23].
2) We give an improved analysis of the constructions based on short-cycle decompositions, giving an m^{1+δ}-time algorithm, for any constant δ > 0, for constructing Eulerian sparsifiers with O(nε^{-2}log³ n) edges.
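
As a small definitional aside, a directed graph is Eulerian here when every vertex has equal in- and out-degree, which is the balance condition the sparsifiers must preserve; the Python helper below just checks that condition on toy edge lists and is unrelated to the paper's constructions.

# Check the Eulerian degree-balance condition on a directed edge list.
from collections import Counter

def is_eulerian(directed_edges):
    out_deg, in_deg = Counter(), Counter()
    for u, v in directed_edges:
        out_deg[u] += 1
        in_deg[v] += 1
    vertices = set(out_deg) | set(in_deg)
    return all(out_deg[v] == in_deg[v] for v in vertices)

print(is_eulerian([(1, 2), (2, 3), (3, 1)]))          # True: a directed cycle
print(is_eulerian([(1, 2), (2, 3), (3, 1), (1, 3)]))  # False: vertices 1 and 3 unbalanced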

Cite as

Sushant Sachdeva, Anvith Thudi, and Yibin Zhao. Better Sparsifiers for Directed Eulerian Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 119:1-119:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sachdeva_et_al:LIPIcs.ICALP.2024.119,
  author =	{Sachdeva, Sushant and Thudi, Anvith and Zhao, Yibin},
  title =	{{Better Sparsifiers for Directed Eulerian Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{119:1--119:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.119},
  URN =		{urn:nbn:de:0030-drops-202628},
  doi =		{10.4230/LIPIcs.ICALP.2024.119},
  annote =	{Keywords: Graph algorithms, Linear algebra and computation, Discrepancy theory}
}
Document
APPROX
Hardy-Muckenhoupt Bounds for Laplacian Eigenvalues

Authors: Gary L. Miller, Noel J. Walkington, and Alex L. Wang

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We present two graph quantities Ψ(G,S) and Ψ₂(G) which give constant-factor estimates of the Dirichlet and Neumann eigenvalues, λ(G,S) and λ₂(G), respectively. Our techniques make use of a discrete Hardy-type inequality due to Muckenhoupt.
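
The eigenvalues being estimated can be computed directly for small graphs: the sketch below (Python with numpy, assumed available) computes the Neumann eigenvalue λ₂ as the second-smallest Laplacian eigenvalue and the Dirichlet eigenvalue λ(G,S) as the smallest eigenvalue of the Laplacian restricted to the non-boundary vertices; the Hardy-Muckenhoupt estimators Ψ themselves are not implemented here.

# Direct computation of the Neumann and Dirichlet Laplacian eigenvalues.
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

edges = [(i, i + 1) for i in range(5)]        # path on 6 vertices
L = laplacian(edges, 6)

# Neumann eigenvalue: second-smallest eigenvalue of L.
lambda2 = np.sort(np.linalg.eigvalsh(L))[1]

# Dirichlet eigenvalue with boundary S = {0}: smallest eigenvalue of L
# restricted to the non-boundary vertices (delete the row/column of each s in S).
interior = [v for v in range(6) if v != 0]
lambda_dirichlet = np.min(np.linalg.eigvalsh(L[np.ix_(interior, interior)]))

print(round(lambda2, 4), round(lambda_dirichlet, 4))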

Cite as

Gary L. Miller, Noel J. Walkington, and Alex L. Wang. Hardy-Muckenhoupt Bounds for Laplacian Eigenvalues. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 8:1-8:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{miller_et_al:LIPIcs.APPROX-RANDOM.2019.8,
  author =	{Miller, Gary L. and Walkington, Noel J. and Wang, Alex L.},
  title =	{{Hardy-Muckenhoupt Bounds for Laplacian Eigenvalues}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{8:1--8:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.8},
  URN =		{urn:nbn:de:0030-drops-112236},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.8},
  annote =	{Keywords: Hardy, Muckenhoupt, Laplacian, eigenvalue, effective resistance}
}