43 Search Results for "Li, Yan"


Document
A Bayesian Rolling Horizon Approach for Rolling Stock Rotation Planning with Predictive Maintenance

Authors: Felix Prause and Ralf Borndörfer

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
We consider the rolling stock rotation planning problem with predictive maintenance (RSRP-PdM), in which a timetable given by a set of trips must be operated by a fleet of vehicles. Here, the health states of the vehicles are assumed to be random variables, and their maintenance schedule should be planned based on their predicted failure probabilities. Utilizing the Bayesian update step of the Kalman filter, we develop a rolling horizon approach for RSRP-PdM in which the predicted health-state distributions are updated as new data become available. This approach reduces the uncertainty of the health states and thus improves the decision-making basis for maintenance planning. To solve the instances, we employ a local neighborhood search, a modification of a heuristic for RSRP-PdM, and demonstrate its effectiveness. Using this solution algorithm, we compare the presented approach with common maintenance strategies on test instances derived from real-world timetables. The obtained results show the benefits of the rolling horizon approach.
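
For illustration, the Bayesian update step that the approach borrows from the Kalman filter can be written in a few lines; the scalar health-state model, variable names, and noise values below are assumptions made for this sketch, not details taken from the paper.

# Minimal sketch of a scalar Kalman/Bayesian update step (illustrative only).
# 'mean' and 'var' describe a predicted health-state distribution of a vehicle;
# 'measurement' is new sensor data with noise variance 'meas_var'.

def kalman_update(mean: float, var: float, measurement: float, meas_var: float):
    """Condition a Gaussian health-state estimate on a noisy observation."""
    kalman_gain = var / (var + meas_var)          # how much to trust the new data
    new_mean = mean + kalman_gain * (measurement - mean)
    new_var = (1.0 - kalman_gain) * var           # uncertainty shrinks after the update
    return new_mean, new_var

# Example: predicted degradation level 0.6 (variance 0.09), sensor reads 0.5 (variance 0.04).
print(kalman_update(0.6, 0.09, 0.5, 0.04))        # -> (approx. 0.531, approx. 0.0277)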

Cite as

Felix Prause and Ralf Borndörfer. A Bayesian Rolling Horizon Approach for Rolling Stock Rotation Planning with Predictive Maintenance. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{prause_et_al:OASIcs.ATMOS.2024.13,
  author =	{Prause, Felix and Bornd\"{o}rfer, Ralf},
  title =	{{A Bayesian Rolling Horizon Approach for Rolling Stock Rotation Planning with Predictive Maintenance}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{13:1--13:19},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.13},
  URN =		{urn:nbn:de:0030-drops-212013},
  doi =		{10.4230/OASIcs.ATMOS.2024.13},
  annote =	{Keywords: Rolling stock rotation planning, Predictive maintenance, Rolling horizon approach, Bayesian inference, Local neighborhood search}
}
Document
Invited Talk
Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back (Invited Talk)

Authors: Vincent Cohen-Addad

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Correlation clustering is a classic model for clustering problems arising in machine learning and data mining. Given a set of data elements represented as vertices of a graph and pairwise similarity represented as edges, the goal is to find a partition of the vertex set so as to minimize the total number of edges across the parts plus the total number of non-edges within the parts. Introduced in the early 2000s [Bansal et al., 2004], correlation clustering has received a large amount of attention through the years. A natural linear programming relaxation was shown in 2005 to have an integrality gap of at least 2 and at most 2.5 [Ailon et al., 2008], and in 2015 the upper bound was improved to 2.06 [Chawla et al., 2015]. In 2021, motivated by large-scale applications, new structural insights led to a simple, practical algorithm that achieves an O(1)-approximation in a variety of models (Massively Parallel, Sublinear, Streaming, or Differentially Private) [Cohen-Addad et al., 2021; Cohen-Addad et al., 2022]. These new insights turned out to be a key building block for designing better algorithms: they serve as a pre-clustering of the input graph that enables algorithms with approximation guarantees significantly better than 2 [Cohen-Addad et al., 2023; Cohen-Addad et al., 2022]. They are also a key component of the new algorithm that achieves a 1.44-approximation [Cao et al., 2024] and of the new local-search-based 1.84-approximation for the Massively Parallel, Sublinear, and Streaming models [Cohen-Addad et al., 2024]. This talk reviews these recent developments and the main open research directions. Based on a collection of joint works with Nairen Cao, Silvio Lattanzi, Euiwoong Lee, Shi Li, David Rasmussen Lolck, Slobodan Mitrovic, Alantha Newman, Ashkan Norouzi-Fard, Nikos Parotsidis, Marcin Pilipczuk, Jakub Tarnawski, Mikkel Thorup, Lukas Vogl, Shuyi Yan, and Hanwen Zhang.
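
The disagreement objective described above is simple enough to state in code; the following sketch evaluates it for a given partition (the graph encoding and names are my own assumptions).

from itertools import combinations

# Count disagreements of a partition on a similarity graph (illustrative helper).
# 'edges' is a set of frozensets {u, v}; 'clusters' maps each vertex to a cluster id.
def correlation_clustering_cost(vertices, edges, clusters):
    cost = 0
    for u, v in combinations(vertices, 2):
        same_cluster = clusters[u] == clusters[v]
        is_edge = frozenset((u, v)) in edges
        if is_edge and not same_cluster:      # similar pair split across parts
            cost += 1
        elif not is_edge and same_cluster:    # dissimilar pair kept together
            cost += 1
    return cost

# Example: a triangle plus an isolated vertex, clustered as {0,1,2} and {3}: cost 0.
V = [0, 1, 2, 3]
E = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2)]}
print(correlation_clustering_cost(V, E, {0: "a", 1: "a", 2: "a", 3: "b"}))  # -> 0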

Cite as

Vincent Cohen-Addad. Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back (Invited Talk). In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cohenaddad:LIPIcs.ESA.2024.1,
  author =	{Cohen-Addad, Vincent},
  title =	{{Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{1:1--1:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.1},
  URN =		{urn:nbn:de:0030-drops-210728},
  doi =		{10.4230/LIPIcs.ESA.2024.1},
  annote =	{Keywords: Approximation Algorithms, Clustering, Local Model}
}
Document
New Algorithms and Lower Bounds for Streaming Tournaments

Authors: Prantar Ghosh and Sahil Kuchlous

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study fundamental directed graph (digraph) problems in the streaming model. An initial investigation by Chakrabarti, Ghosh, McGregor, and Vorotnikova [SODA'20] on streaming digraphs showed that while most of these problems are provably hard in general, some of them become tractable when restricted to the well-studied class of tournament graphs where every pair of nodes shares exactly one directed edge. Thus, we focus on tournaments and improve the state of the art for multiple problems in terms of both upper and lower bounds. Our primary upper bound is a deterministic single-pass semi-streaming algorithm (using Õ(n) space for n-node graphs, where Õ(.) hides polylog(n) factors) for decomposing a tournament into strongly connected components (SCC). It improves upon the previously best-known algorithm by Baweja, Jia, and Woodruff [ITCS'22] in terms of both space and passes: for p ⩾ 1, they used (p+1) passes and Õ(n^{1+1/p}) space. We further extend our algorithm to digraphs that are close to tournaments and establish tight bounds demonstrating that the problem’s complexity grows smoothly with the "distance" from tournaments. Applying our SCC-decomposition framework, we obtain improved - and in some cases, optimal - tournament algorithms for s,t-reachability, strong connectivity, Hamiltonian paths and cycles, and feedback arc set. On the other hand, we prove lower bounds exhibiting that some well-studied problems - such as (exact) feedback arc set and s,t-distance - remain hard (require Ω(n²) space) on tournaments. Moreover, we generalize the former problem’s lower bound to establish space-approximation tradeoffs: any single-pass (1± ε)-approximation algorithm requires Ω(n/√{ε}) space. Finally, we settle the streaming complexities of two basic digraph problems studied by prior work: acyclicity testing of tournaments and sink finding in DAGs. As a whole, our collection of results contributes significantly to the growing literature on streaming digraphs.
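
As background for why tournaments are more tractable than general digraphs, the classical offline insertion argument constructs a Hamiltonian path in any tournament; the sketch below illustrates only that argument and is not the streaming algorithm of the paper, and the adjacency encoding is an assumption.

# Classical offline construction of a Hamiltonian path in a tournament by insertion
# (not the paper's streaming algorithm; shown only to illustrate tournament structure).
# 'beats[u][v]' is True iff the directed edge goes from u to v; exactly one of
# beats[u][v], beats[v][u] holds for every pair u != v.

def hamiltonian_path(nodes, beats):
    path = []
    for u in nodes:
        # Insert u before the first vertex on the path that u beats (or append at the end).
        pos = len(path)
        for i, v in enumerate(path):
            if beats[u][v]:
                pos = i
                break
        path.insert(pos, u)
    return path

# Example: the 3-cycle tournament 0 -> 1 -> 2 -> 0.
beats = {0: {1: True, 2: False}, 1: {0: False, 2: True}, 2: {0: True, 1: False}}
print(hamiltonian_path([0, 1, 2], beats))  # -> [2, 0, 1], i.e. the directed path 2 -> 0 -> 1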

Cite as

Prantar Ghosh and Sahil Kuchlous. New Algorithms and Lower Bounds for Streaming Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ghosh_et_al:LIPIcs.ESA.2024.60,
  author =	{Ghosh, Prantar and Kuchlous, Sahil},
  title =	{{New Algorithms and Lower Bounds for Streaming Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.60},
  URN =		{urn:nbn:de:0030-drops-211318},
  doi =		{10.4230/LIPIcs.ESA.2024.60},
  annote =	{Keywords: tournaments, streaming algorithms, graph algorithms, communication complexity, strongly connected components, reachability, feedback arc set}
}
Document
Shortest Path Separators in Unit Disk Graphs

Authors: Elfarouk Harb, Zhengcheng Huang, and Da Wei Zheng

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We introduce a new balanced separator theorem for unit-disk graphs involving two shortest paths combined with the 1-hop neighbours of those paths and two other vertices. This answers an open problem of Yan, Xiang and Dragan [CGTA '12] and improves their result that requires removing the 3-hop neighbourhood of two shortest paths. Our proof uses very different ideas, including Delaunay triangulations and a generalization of the celebrated balanced separator theorem of Lipton and Tarjan [J. Appl. Math. '79] to systems of non-intersecting paths.

Cite as

Elfarouk Harb, Zhengcheng Huang, and Da Wei Zheng. Shortest Path Separators in Unit Disk Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{harb_et_al:LIPIcs.ESA.2024.66,
  author =	{Harb, Elfarouk and Huang, Zhengcheng and Zheng, Da Wei},
  title =	{{Shortest Path Separators in Unit Disk Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.66},
  URN =		{urn:nbn:de:0030-drops-211375},
  doi =		{10.4230/LIPIcs.ESA.2024.66},
  annote =	{Keywords: Balanced shortest path separators, unit disk graphs, crossings}
}
Document
The Last Success Problem with Samples

Authors: Toru Yoshinaga and Yasushi Kawase

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The last success problem is an optimal stopping problem that aims to maximize the probability of stopping on the last success in a sequence of n independent Bernoulli trials. In the classical setting where complete information about the distributions is available, Bruss [Bruss, 2000] provided an optimal stopping policy that ensures a winning probability of 1/e. However, assuming complete knowledge of the distributions is unrealistic in many practical applications. This paper investigates a variant of the last success problem where samples from each distribution are available instead of complete knowledge of them. When a single sample from each distribution is allowed, we provide a deterministic policy that guarantees a winning probability of 1/4. This is best possible, matching the upper bound provided by Nuti and Vondrák [Nuti and Vondrák, 2023]. Furthermore, for any positive constant ε, we show that a constant number of samples from each distribution is sufficient to guarantee a winning probability of 1/e - ε.
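
For context, Bruss's full-information odds algorithm, the source of the 1/e benchmark mentioned above, can be sketched as follows; this is not the sample-based policy of the paper, and the function and variable names are my own.

# Bruss's odds algorithm for the full-information last-success problem
# (the classical 1/e benchmark, not this paper's sample-based policy).
# 'probs[i]' is the success probability of the i-th Bernoulli trial.

def odds_stopping_index(probs):
    """Return the 1-based index from which on one should accept the first success."""
    odds_sum = 0.0
    threshold = 1                             # if the total odds stay below 1, accept the very first success
    for i in range(len(probs) - 1, -1, -1):   # sum the odds r_i = p_i / (1 - p_i) backwards
        p = probs[i]
        odds_sum += p / (1.0 - p) if p < 1.0 else float("inf")
        if odds_sum >= 1.0:
            threshold = i + 1
            break
    return threshold

# Example: five fair coins -> only accept a success at the last trial (win probability 0.5).
print(odds_stopping_index([0.5] * 5))  # -> 5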

Cite as

Toru Yoshinaga and Yasushi Kawase. The Last Success Problem with Samples. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 105:1-105:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{yoshinaga_et_al:LIPIcs.ESA.2024.105,
  author =	{Yoshinaga, Toru and Kawase, Yasushi},
  title =	{{The Last Success Problem with Samples}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{105:1--105:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.105},
  URN =		{urn:nbn:de:0030-drops-211762},
  doi =		{10.4230/LIPIcs.ESA.2024.105},
  annote =	{Keywords: The Last Success Problem, Secretary Problem, Sample Information Model, Optimal Stopping, Online Algorithms}
}
Document
APPROX
Degrees and Network Design: New Problems and Approximations

Authors: Michael Dinitz, Guy Kortsarz, and Shi Li

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
While much of network design focuses on cost (the number or weight of edges), node degrees have also played an important role. They have traditionally appeared either as an objective, to minimize the maximum degree (e.g., the Minimum Degree Spanning Tree problem), or as constraints that might be violated to give bicriteria approximations (e.g., the Minimum Cost Degree Bounded Spanning Tree problem). We extend the study of degrees in network design in two ways. First, we introduce and study a new variant of the Survivable Network Design Problem where, in addition to the traditional objective of minimizing the cost of the chosen edges, we add a constraint that the 𝓁_p-norm of the node degree vector is bounded by an input parameter. This interpolates between the classical settings of maximum degree (the 𝓁_∞-norm) and number of edges (the 𝓁₁-norm), and has natural applications in distributed systems and VLSI design. We give a constant bicriteria approximation in both measures using convex programming. Second, we provide a polylogarithmic bicriteria approximation for the Degree Bounded Group Steiner problem on bounded-treewidth graphs, solving an open problem from [Guy Kortsarz and Zeev Nutov, 2022] and [X. Guo et al., 2022].
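
The interpolation described above is just the 𝓁_p-norm of the degree vector; the small helper below (graph encoding and names are assumptions) shows the two extreme regimes, using the fact that the 𝓁₁-norm of the degree vector equals twice the number of edges.

from collections import Counter

# The degree-vector norms interpolated by the new SNDP variant (illustrative helper).
# 'edges' is a list of (u, v) pairs; returns the l_p-norm of the resulting degree vector.
def degree_lp_norm(edges, p):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    if p == float("inf"):
        return max(deg.values(), default=0)            # maximum degree
    return sum(d ** p for d in deg.values()) ** (1.0 / p)

star = [(0, i) for i in range(1, 5)]                    # a star with 4 leaves
print(degree_lp_norm(star, 1))                          # 8.0 (= 2|E|, the "number of edges" regime)
print(degree_lp_norm(star, float("inf")))               # 4   (the max-degree regime)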

Cite as

Michael Dinitz, Guy Kortsarz, and Shi Li. Degrees and Network Design: New Problems and Approximations. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dinitz_et_al:LIPIcs.APPROX/RANDOM.2024.3,
  author =	{Dinitz, Michael and Kortsarz, Guy and Li, Shi},
  title =	{{Degrees and Network Design: New Problems and Approximations}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.3},
  URN =		{urn:nbn:de:0030-drops-209969},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.3},
  annote =	{Keywords: Network Design, Degrees}
}
Document
APPROX
On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms

Authors: Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Dense subgraph discovery is an important problem in graph mining and network analysis with several applications. Two canonical polynomial-time solvable problems here are finding a maxcore (a subgraph of maximum minimum degree) and finding a densest subgraph (a subgraph of maximum average degree). Veldt, Benson, and Kleinberg [Veldt et al., 2021] introduced the generalized p-mean densest subgraph problem, which captures the maxcore problem when p = -∞ and the densest subgraph problem when p = 1. They observed that for p ≥ 1, the objective function is supermodular and hence the problem can be solved in polynomial time. In this work, we focus on the p-mean densest subgraph problem for p ∈ (-∞, 1). We prove that for every p ∈ (-∞,1), the problem is NP-hard, thus resolving an open question from [Veldt et al., 2021]. We also show that for every p ∈ (0,1), the weighted version of the problem is APX-hard. On the algorithmic front, we describe two simple 1/2-approximation algorithms for every p ∈ (-∞, 1). We complement the approximation algorithms by exhibiting non-trivial instances on which the algorithms simultaneously achieve an approximation factor of at most 1/2.
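
For concreteness, the generalized p-mean degree objective can be evaluated directly; the sketch below uses my own graph encoding and a plausible normalization of the p-mean, and is meant only to illustrate how p = 1 and p = -∞ recover the two classical objectives.

# Generalized p-mean degree objective in the spirit of Veldt, Benson, and Kleinberg
# (illustrative sketch; encoding and normalization are assumptions, not taken from the paper).

def p_mean_density(adj, subset, p):
    """p-mean of the degrees induced by 'subset' in the graph given by adjacency sets 'adj'."""
    S = set(subset)
    degs = [len(adj[v] & S) for v in S]
    if p == float("-inf"):
        return min(degs)                                  # maxcore objective (p = -infinity)
    return (sum(d ** p for d in degs) / len(degs)) ** (1.0 / p)

# Example: a triangle with a pendant vertex attached to vertex 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(p_mean_density(adj, {0, 1, 2}, 1))                  # 2.0 (average degree: densest-subgraph objective)
print(p_mean_density(adj, {0, 1, 2, 3}, float("-inf")))   # 1   (minimum induced degree)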

Cite as

Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu. On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.APPROX/RANDOM.2024.9,
  author =	{Chandrasekaran, Karthekeyan and Chekuri, Chandra and Torres, Manuel R. and Zhu, Weihao},
  title =	{{On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  URN =		{urn:nbn:de:0030-drops-210025},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  annote =	{Keywords: Densest subgraph problem, Hardness of approximation, Approximation algorithms}
}
Document
RANDOM
Hilbert Functions and Low-Degree Randomness Extractors

Authors: Alexander Golovnev, Zeyu Guo, Pooya Hatami, Satyajeet Nagargoje, and Chao Yan

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
For S ⊆ 𝔽ⁿ, consider the linear space of restrictions of degree-d polynomials to S. The Hilbert function of S, denoted h_S(d,𝔽), is the dimension of this space. We obtain a tight lower bound on the smallest value of the Hilbert function of subsets S of arbitrary finite grids in 𝔽ⁿ with a fixed size |S|. We achieve this by proving that this value coincides with a combinatorial quantity, namely the smallest number of low Hamming weight points in a down-closed set of size |S|. Understanding the smallest values of Hilbert functions is closely related to the study of degree-d closure of sets, a notion introduced by Nie and Wang (Journal of Combinatorial Theory, Series A, 2015). We use bounds on the Hilbert function to obtain a tight bound on the size of degree-d closures of subsets of 𝔽_qⁿ, which answers a question posed by Doron, Ta-Shma, and Tell (Computational Complexity, 2022). We use the bounds on the Hilbert function and degree-d closure of sets to prove that a random low-degree polynomial is an extractor for samplable randomness sources. Most notably, we prove the existence of low-degree extractors and dispersers for sources generated by constant-degree polynomials and polynomial-size circuits. Until recently, even the existence of arbitrary deterministic extractors for such sources was not known.
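
The Hilbert function h_S(d, 𝔽_p) can be computed directly from its definition as the rank of the evaluation matrix of all monomials of total degree at most d on the points of S; the following sketch over a prime field is illustrative, and the implementation details are mine.

from itertools import product

# Compute h_S(d, F_p) from the definition in the abstract: the dimension of the space of
# restrictions of degree-<=d polynomials to S, i.e. the rank over F_p of the matrix whose
# rows are the points of S evaluated at all monomials of degree <= d. (Illustrative sketch.)

def monomials(n, d):
    """All exponent vectors e in {0,..,d}^n with sum(e) <= d."""
    return [e for e in product(range(d + 1), repeat=n) if sum(e) <= d]

def evaluate(point, expo, p):
    """Evaluate the monomial with exponent vector 'expo' at 'point', modulo p."""
    val = 1
    for x, e in zip(point, expo):
        val = (val * pow(x, e, p)) % p
    return val

def rank_mod_p(matrix, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination."""
    m = [[x % p for x in row] for row in matrix]
    rank, col = 0, 0
    rows, cols = len(m), len(m[0]) if m else 0
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], -1, p)
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(rows):
            if r != rank and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [(a - factor * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

def hilbert_function(S, d, p):
    monos = monomials(len(S[0]), d)
    rows = [[evaluate(pt, e, p) for e in monos] for pt in S]
    return rank_mod_p(rows, p)

# Example over F_2: the points (0,0), (1,0), (0,1) and degree d = 1.
print(hilbert_function([(0, 0), (1, 0), (0, 1)], 1, 2))  # -> 3 (the points impose independent conditions)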

Cite as

Alexander Golovnev, Zeyu Guo, Pooya Hatami, Satyajeet Nagargoje, and Chao Yan. Hilbert Functions and Low-Degree Randomness Extractors. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 41:1-41:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{golovnev_et_al:LIPIcs.APPROX/RANDOM.2024.41,
  author =	{Golovnev, Alexander and Guo, Zeyu and Hatami, Pooya and Nagargoje, Satyajeet and Yan, Chao},
  title =	{{Hilbert Functions and Low-Degree Randomness Extractors}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{41:1--41:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.41},
  URN =		{urn:nbn:de:0030-drops-210345},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.41},
  annote =	{Keywords: Extractors, Dispersers, Circuits, Hilbert Function, Randomness, Low Degree Polynomials}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets respectively.
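
A hypothetical sketch of the structured-prompt step described above: pair a symbolic balance-change expression with a documentation excerpt and ask an LLM for a verdict. The prompt wording, the query_llm placeholder, and the example strings are all invented for illustration and are not part of DeFiAligner.

# Hypothetical sketch of combining a symbolic expression with documentation in a structured
# prompt. 'query_llm' is a placeholder, not an API of DeFiAligner or of any real library.

def build_alignment_prompt(symbolic_expr: str, doc_excerpt: str) -> str:
    return (
        "You are auditing a DeFi smart contract.\n"
        f"Symbolic token-balance change derived from the on-chain bytecode:\n{symbolic_expr}\n"
        f"Documentation claim:\n{doc_excerpt}\n"
        "Answer 'consistent' or 'inconsistent' and explain briefly."
    )

prompt = build_alignment_prompt(
    "balance[user] += amount * (1 - fee)",          # illustrative symbolic expression
    "Deposits are credited in full with no fees.",  # illustrative documentation claim
)
print(prompt)
# verdict = query_llm(prompt)   # hypothetical placeholder call; any real integration goes here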

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
SoK: Zero-Knowledge Range Proofs

Authors: Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Zero-knowledge range proofs (ZKRPs) allow a prover to convince a verifier that a secret value lies in a given interval. ZKRPs have numerous applications: from anonymous credentials and auctions, to confidential transactions in cryptocurrencies. At the same time, a plethora of ZKRP constructions exist in the literature, each with its own trade-offs. In this work, we systematize the knowledge around ZKRPs. We create a classification of existing constructions based on the underlying building techniques, and we summarize their properties. We provide comparisons between schemes both in terms of properties as well as efficiency levels, and construct a guideline to assist in the selection of an appropriate ZKRP for different application requirements. Finally, we discuss a number of interesting open research problems.
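
One of the classical building techniques such surveys cover is bit decomposition: to show v ∈ [0, 2^n), commit to the bits of v, prove each bit is 0 or 1, and prove that the bits recombine to v. The sketch below checks only the underlying arithmetic identity and is in no way a cryptographic protocol.

# The arithmetic identity behind bit-decomposition range proofs (one classical ZKRP technique):
# proving v in [0, 2^n) reduces to exhibiting bits b_0..b_{n-1} with b_i * (b_i - 1) = 0 and
# v = sum b_i * 2^i. This sketch only checks the identity; it is not a cryptographic protocol.

def bit_decompose(v, n):
    bits = [(v >> i) & 1 for i in range(n)]
    assert all(b * (b - 1) == 0 for b in bits)           # each b_i is 0 or 1
    assert sum(b << i for i, b in enumerate(bits)) == v  # the bits recombine to v
    return bits

print(bit_decompose(42, 6))  # -> [0, 1, 0, 1, 0, 1], certifying 42 in [0, 64)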

Cite as

Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang. SoK: Zero-Knowledge Range Proofs. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{christ_et_al:LIPIcs.AFT.2024.14,
  author =	{Christ, Miranda and Baldimtsi, Foteini and Chalkias, Konstantinos Kryptos and Maram, Deepak and Roy, Arnab and Wang, Joy},
  title =	{{SoK: Zero-Knowledge Range Proofs}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.14},
  URN =		{urn:nbn:de:0030-drops-209504},
  doi =		{10.4230/LIPIcs.AFT.2024.14},
  annote =	{Keywords: Range proofs, zero knowledge}
}
Document
Transaction Fee Mechanism Design in a Post-MEV World

Authors: Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase "maximal extractable value (MEV)." We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: With active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as order flow auctions, block producer competition, trusted hardware, or cryptographic techniques. We then consider a more fine-grained model of block production that more accurately reflects current practice, in which we distinguish the roles of "searchers" (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and "proposers" (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an "MEV oracle" for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a TFM that is inspired by how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the TFM design space with searchers more generally, and design a mechanism that circumvents our impossibility results for TFMs without searchers. Our mechanism (the "SAKA" mechanism) is incentive-compatible (for users, searchers, and the block producer), sybil-proof, and guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transaction sizes are small, no DSIC and sybil-proof deterministic TFM can guarantee more than 50% of the maximum-possible welfare.
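
The "traditional" searcher integration the abstract refers to, each transaction sold off to a searcher through a first-price auction, can be illustrated with a toy allocation rule; this is not the SAKA mechanism, and the data layout is an assumption.

# Toy per-transaction first-price auction, illustrating the searcher-based TFM the abstract
# describes as inspired by traditional searcher integration (not the SAKA mechanism).
# 'bids[tx]' maps searcher name -> bid for the right to take the MEV opportunity in tx.

def first_price_per_transaction(bids):
    winners = {}
    for tx, offers in bids.items():
        searcher, bid = max(offers.items(), key=lambda kv: kv[1])
        winners[tx] = (searcher, bid)   # the winner pays their own bid (first price)
    return winners

print(first_price_per_transaction({"tx1": {"s1": 0.4, "s2": 0.7}, "tx2": {"s1": 0.2}}))
# -> {'tx1': ('s2', 0.7), 'tx2': ('s1', 0.2)}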

Cite as

Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden. Transaction Fee Mechanism Design in a Post-MEV World. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 29:1-29:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bahrani_et_al:LIPIcs.AFT.2024.29,
  author =	{Bahrani, Maryam and Garimidi, Pranav and Roughgarden, Tim},
  title =	{{Transaction Fee Mechanism Design in a Post-MEV World}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{29:1--29:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.29},
  URN =		{urn:nbn:de:0030-drops-209658},
  doi =		{10.4230/LIPIcs.AFT.2024.29},
  annote =	{Keywords: MEV, Transaction Fee Mechanisms, Auctions}
}
Document
Dynamically Generating Callback Summaries for Enhancing Static Analysis

Authors: Steven Arzt, Marc Miltenberger, and Julius Näumann

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Interprocedural static analyses require a complete and precise callgraph. Since third-party libraries are responsible for large portions of the code of an app, a substantial fraction of the effort in callgraph generation is spent on the library code for each app. For analyses that are oblivious to the inner workings of a library and only require the user code to be processed, the library can be replaced with a summary that allows the callbacks from library code back to user code to be reconstructed. To improve performance, we propose the automatic generation and use of precise pre-computed callgraph summaries for commonly used libraries. Reflective method calls within libraries and callback-driven APIs pose further challenges for generating precise callgraphs using static analysis. Pre-computed summaries can also help analyses avoid these challenges. We present CGMiner, an approach for automatically generating callgraph models for library code. It dynamically observes sample apps that use one or more particular target libraries. As we show, CGMiner yields more than 94% correct edges, whereas existing work achieves only around 33% correct edges. CGMiner avoids the high false-positive rate of existing tools. We show that CGMiner integrated into FlowDroid uncovers 40% more data flows than our baseline without callback summaries.

Cite as

Steven Arzt, Marc Miltenberger, and Julius Näumann. Dynamically Generating Callback Summaries for Enhancing Static Analysis. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 4:1-4:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arzt_et_al:LIPIcs.ECOOP.2024.4,
  author =	{Arzt, Steven and Miltenberger, Marc and N\"{a}umann, Julius},
  title =	{{Dynamically Generating Callback Summaries for Enhancing Static Analysis}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{4:1--4:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.4},
  URN =		{urn:nbn:de:0030-drops-208533},
  doi =		{10.4230/LIPIcs.ECOOP.2024.4},
  annote =	{Keywords: dynamic analysis, callback detection, java, android}
}
Document
HOBBIT: Hashed OBject Based InTegrity

Authors: Matthias Bernad and Stefan Brunthaler

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
C vulnerabilities usually hold verbatim for C++ programs. The counterfeit-object-oriented programming attack demonstrated that this relation is asymmetric, i.e., it only applies to C++. The problem pinpointed by this COOP attack is that C++ does not validate the integrity of its objects. By injecting malicious objects with manipulated virtual function table pointers, attackers can hijack control-flow of programs. The software security community addressed the COOP problem in the years following its discovery, but together with the emergence of transient-execution attacks, such as Spectre, researchers also shifted their attention. We present Hobbit, a software-only solution to prevent COOP attacks by validating object integrity for virtual function pointer tables. Hobbit does not require any hardware-specific features, scales to multi-million lines of C++ source code, and our LLVM-based implementation offers a configurable performance impact between 2.80% and 121.63% on compute-intensive SPEC CPU C++ benchmarks. Hobbit’s security analysis indicates strong resistance to brute-forcing attacks and demonstrates additional benefits of using execute-only memory.

Cite as

Matthias Bernad and Stefan Brunthaler. HOBBIT: Hashed OBject Based InTegrity. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 7:1-7:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bernad_et_al:LIPIcs.ECOOP.2024.7,
  author =	{Bernad, Matthias and Brunthaler, Stefan},
  title =	{{HOBBIT: Hashed OBject Based InTegrity}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{7:1--7:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.7},
  URN =		{urn:nbn:de:0030-drops-208566},
  doi =		{10.4230/LIPIcs.ECOOP.2024.7},
  annote =	{Keywords: software security, code-reuse attacks, language-based security, counterfeit-object-oriented programming, object integrity, compiler security}
}
Document
A CFL-Reachability Formulation of Callsite-Sensitive Pointer Analysis with Built-In On-The-Fly Call Graph Construction

Authors: Dongjie He, Jingbo Lu, and Jingling Xue

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
In object-oriented languages, the traditional CFL-reachability formulation for k-callsite-sensitive pointer analysis (kCFA) focuses on modeling field accesses and calling contexts, but it relies on a separate algorithm for call graph construction. This division can result in a loss of precision in kCFA, a problem that persists even when using the most precise call graphs, whether pre-constructed or generated on the fly. Moreover, pre-analyses based on this framework aiming to improve the efficiency of kCFA may inadvertently reduce its precision, due to the framework’s lack of native call graph construction, essential for precise analysis. Addressing this gap, this paper introduces a novel CFL-reachability formulation of kCFA for Java, uniquely integrating on-the-fly call graph construction. This advancement not only addresses the precision loss inherent in the traditional CFL-reachability-based approach but also enhances its overall applicability. In a significant secondary contribution, we present the first precision-preserving pre-analysis to accelerate kCFA. This pre-analysis leverages selective context sensitivity to improve the efficiency of kCFA without sacrificing its precision. Collectively, these contributions represent a substantial step forward in pointer analysis, offering both theoretical and practical advancements that could benefit future developments in the field.

Cite as

Dongjie He, Jingbo Lu, and Jingling Xue. A CFL-Reachability Formulation of Callsite-Sensitive Pointer Analysis with Built-In On-The-Fly Call Graph Construction. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 18:1-18:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{he_et_al:LIPIcs.ECOOP.2024.18,
  author =	{He, Dongjie and Lu, Jingbo and Xue, Jingling},
  title =	{{A CFL-Reachability Formulation of Callsite-Sensitive Pointer Analysis with Built-In On-The-Fly Call Graph Construction}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{18:1--18:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.18},
  URN =		{urn:nbn:de:0030-drops-208674},
  doi =		{10.4230/LIPIcs.ECOOP.2024.18},
  annote =	{Keywords: Pointer Analysis, CFL Reachability, Call Graph Construction}
}
Document
Scaling Interprocedural Static Data-Flow Analysis to Large C/C++ Applications: An Experience Report

Authors: Fabian Schiebel, Florian Sattler, Philipp Dominik Schubert, Sven Apel, and Eric Bodden

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Interprocedural data-flow analysis is important for computing precise information on whole programs. In theory, the popular algorithmic framework of interprocedural distributive environments (IDE) provides a tool to solve distributive interprocedural data-flow problems efficiently. Unfortunately, available state-of-the-art implementations of the IDE framework start to run into scalability issues for programs with several thousand lines of code, depending on the static analysis domain. Since the IDE framework is a basic building block for many static program analyses, this presents a serious limitation. In this paper, we report on our experience with making the IDE algorithm scale to C/C++ applications with up to 500 000 lines of code. We analyze the IDE algorithm and its state-of-the-art implementations to identify their weaknesses related to scalability at both the conceptual and the implementation level. Based on this analysis, we propose several optimizations to overcome these weaknesses, aiming at a sweet spot between reducing running time and memory consumption. As a result, we provide an improved IDE solver that implements our optimizations within the PhASAR static analysis framework. Our evaluation on real-world C/C++ applications shows that applying the optimizations speeds up the analysis by up to 7× on average, while also reducing memory consumption by 7× on average. For the first time, these optimizations allow us to analyze programs with several hundred thousand lines of LLVM-IR code in reasonable time and space.

Cite as

Fabian Schiebel, Florian Sattler, Philipp Dominik Schubert, Sven Apel, and Eric Bodden. Scaling Interprocedural Static Data-Flow Analysis to Large C/C++ Applications: An Experience Report. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 36:1-36:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schiebel_et_al:LIPIcs.ECOOP.2024.36,
  author =	{Schiebel, Fabian and Sattler, Florian and Schubert, Philipp Dominik and Apel, Sven and Bodden, Eric},
  title =	{{Scaling Interprocedural Static Data-Flow Analysis to Large C/C++ Applications: An Experience Report}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{36:1--36:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.36},
  URN =		{urn:nbn:de:0030-drops-208859},
  doi =		{10.4230/LIPIcs.ECOOP.2024.36},
  annote =	{Keywords: Interprocedural data-flow analysis, IDE, LLVM, C/C++}
}
  • Refine by Author
  • 2 Cai, Shaowei
  • 2 Li, Xin
  • 2 Li, Yan
  • 2 Shekhar, Shashi
  • 2 Zhong, Yan

  • Refine by Classification
  • 3 Applied computing → Computational genomics
  • 3 Theory of computation → Approximation algorithms analysis
  • 3 Theory of computation → Pseudorandomness and derandomization
  • 2 Applied computing → Molecular sequence analysis
  • 2 Information systems → Data mining

  • Refine by Keyword
  • 2 Affine
  • 2 Randomness Extractors
  • 1 AC⁰ circuits
  • 1 Approximation Algorithms
  • 1 Approximation algorithms

  • Refine by Type
  • 43 document

  • Refine by Publication Year
  • 39 2024
  • 2 2018
  • 1 2007
  • 1 2023
