42 Search Results for "Cheng, Yu"


Document
A Faster Algorithm for Finding Negative Cycles in Simple Temporal Networks with Uncertainty

Authors: Luke Hunsberger and Roberto Posenato

Published in: LIPIcs, Volume 318, 31st International Symposium on Temporal Representation and Reasoning (TIME 2024)


Abstract
Temporal constraint networks are data structures for representing and reasoning about time (e.g., temporal constraints among actions in a plan). Finding and computing negative cycles in temporal networks is important for planning and scheduling applications since it is the first step toward resolving inconsistent networks. For Simple Temporal Networks (STNs), the problem reduces to finding simple negative cycles (i.e., with no repeated nodes), for which numerous efficient algorithms exist. For Simple Temporal Networks with Uncertainty (STNUs), which accommodate actions with uncertain durations, the situation is more complex because the defining characteristic of a non-dynamically controllable (non-DC) network is a so-called semi-reducible negative (SRN) cycle, which can have repeated edges and, in the worst case, an exponential number of occurrences of such edges. The algorithms for computing SRN cycles in non-DC STNUs presented so far are based on older, less efficient DC-checking algorithms. In addition, the issue of repeated edges has either been ignored or given scant attention. This paper presents a new, faster algorithm for identifying SRN cycles in non-DC STNUs. Its worst-case time complexity is O(mn + k²n + kn log n), where n is the number of timepoints, m is the number of constraints, and k is the number of actions with uncertain durations. This matches the complexity of the fastest DC-checking algorithm for STNUs. It avoids an exponential blow-up by efficiently dealing with repeated structures and outputting a compact representation of the SRN cycle it finds. The space required to compactly store accumulated path information while avoiding redundant storage of repeated edges is O(mk + k²n). An empirical evaluation demonstrates the effectiveness of the new algorithm on an existing benchmark.
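
For orientation, the STN special case mentioned above reduces to ordinary negative-cycle detection in a weighted distance graph, which a textbook Bellman-Ford pass handles. The Python sketch below illustrates only that baseline; the paper's STNU algorithm, which must handle contingent links and semi-reducible cycles, is substantially more involved.

# Minimal sketch (STN baseline only): Bellman-Ford negative-cycle
# detection in a distance graph. Not the paper's STNU algorithm.

def has_negative_cycle(n, edges):
    """n timepoints 0..n-1; each edge (u, v, w) encodes X_v - X_u <= w."""
    dist = [0] * n  # implicit source at distance 0 to every timepoint
    for _ in range(n - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            break
    # Any further improvement after n-1 rounds implies a negative cycle.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# X1 - X0 <= 5 and X0 - X1 <= -6 form a cycle of weight -1:
print(has_negative_cycle(2, [(0, 1, 5), (1, 0, -6)]))  # True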

Cite as

Luke Hunsberger and Roberto Posenato. A Faster Algorithm for Finding Negative Cycles in Simple Temporal Networks with Uncertainty. In 31st International Symposium on Temporal Representation and Reasoning (TIME 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 318, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hunsberger_et_al:LIPIcs.TIME.2024.9,
  author =	{Hunsberger, Luke and Posenato, Roberto},
  title =	{{A Faster Algorithm for Finding Negative Cycles in Simple Temporal Networks with Uncertainty}},
  booktitle =	{31st International Symposium on Temporal Representation and Reasoning (TIME 2024)},
  pages =	{9:1--9:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-349-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{318},
  editor =	{Sala, Pietro and Sioutis, Michael and Wang, Fusheng},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2024.9},
  URN =		{urn:nbn:de:0030-drops-212160},
  doi =		{10.4230/LIPIcs.TIME.2024.9},
  annote =	{Keywords: Temporal constraint networks, overconstrained networks, negative cycles}
}
Document
Robust Execution of Probabilistic STNs

Authors: Luke Hunsberger and Roberto Posenato

Published in: LIPIcs, Volume 318, 31st International Symposium on Temporal Representation and Reasoning (TIME 2024)


Abstract
A Probabilistic Simple Temporal Network (PSTN) is a formalism for representing and reasoning about actions subject to temporal constraints, where some action durations may be uncontrollable, modeled using continuous probability density functions. Recent work aims to manage this kind of uncertainty during execution by approximating a PSTN by a Simple Temporal Network with Uncertainty (STNU) (for which well-known execution strategies exist) and using an STNU execution strategy to execute the PSTN, hoping that its probabilistic action durations will not cause any constraint violations. This paper presents significant improvements to the robust execution of PSTNs. Our approach is based on a recent, faster algorithm for finding negative cycles in non-DC STNUs. We also formally prove that many of the constraints included in others' work are unnecessary and that our algorithm can take advantage of a flexible real-time execution algorithm to react to observations of contingent durations that may fall outside the fixed STNU bounds. The paper presents an empirical evaluation of our approach that provides evidence of its effectiveness in robustly executing PSTNs derived from a publicly available benchmark.
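
To make the approximation step concrete, the sketch below clips a normally distributed duration to a central quantile interval. The normal distribution, the risk level, and the helper name stnu_bounds are illustrative assumptions, not the paper's actual bound-selection procedure.

from statistics import NormalDist

# Illustrative only: one simple way to approximate a probabilistic
# duration by fixed STNU bounds is to keep the central 1 - risk mass.
# The paper's bound-selection procedure may differ.

def stnu_bounds(mu, sigma, risk=0.05):
    """Return (lo, hi) covering probability mass 1 - risk of N(mu, sigma)."""
    d = NormalDist(mu, sigma)
    return d.inv_cdf(risk / 2), d.inv_cdf(1 - risk / 2)

# A contingent duration ~ N(10, 2) becomes an STNU link with bounds:
print(stnu_bounds(10, 2))  # approximately (6.08, 13.92)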

Cite as

Luke Hunsberger and Roberto Posenato. Robust Execution of Probabilistic STNs. In 31st International Symposium on Temporal Representation and Reasoning (TIME 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 318, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hunsberger_et_al:LIPIcs.TIME.2024.12,
  author =	{Hunsberger, Luke and Posenato, Roberto},
  title =	{{Robust Execution of Probabilistic STNs}},
  booktitle =	{31st International Symposium on Temporal Representation and Reasoning (TIME 2024)},
  pages =	{12:1--12:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-349-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{318},
  editor =	{Sala, Pietro and Sioutis, Michael and Wang, Fusheng},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2024.12},
  URN =		{urn:nbn:de:0030-drops-212197},
  doi =		{10.4230/LIPIcs.TIME.2024.12},
  annote =	{Keywords: Temporal constraint networks, probabilistic durations, dispatchable networks}
}
Document
Periodic Timetabling: Travel Time vs. Regenerative Energy

Authors: Sven Jäger, Sarah Roth, and Anita Schöbel

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
While it is important to provide passengers with attractive public transportation that allows short travel times, it should also be a major concern to reduce the amount of energy used by the public transport system. Electric trains can regenerate energy when braking, which can be used by a nearby accelerating train. Therefore, apart from the minimization of travel times, the maximization of brake-traction overlaps of nearby trains is an important objective in periodic timetabling. Recently, this has been studied in a model allowing small modifications of a nominal timetable. We investigate the problem of finding periodic timetables that are globally good in both objective functions. We show that the general problem is NP-hard, even when restricted to a single transfer station and when only travel time is to be minimized, and give an algorithm with an additive error bound for maximizing the brake-traction overlap on this small network. Moreover, we identify special cases in which the problem is solvable in polynomial time. Finally, we demonstrate the trade-off between the two objective functions in an experimental study.
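
Periodic timetabling is commonly formalized as a Periodic Event Scheduling Problem (PESP). As a hedged illustration of the kind of constraint a periodic timetable must satisfy, the snippet below checks one periodic activity constraint; the function name and the numbers are invented for the example.

# One PESP-style constraint: an activity from event i to event j with
# bounds [l, u] is satisfied modulo the period T iff
# (t_j - t_i - l) mod T <= u - l.

def activity_feasible(t_i, t_j, l, u, T=60):
    return (t_j - t_i - l) % T <= u - l

# A transfer that must take 3..8 minutes in an hourly timetable:
print(activity_feasible(55, 2, 3, 8))  # True: (2 - 55 - 3) mod 60 = 4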

Cite as

Sven Jäger, Sarah Roth, and Anita Schöbel. Periodic Timetabling: Travel Time vs. Regenerative Energy. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{jager_et_al:OASIcs.ATMOS.2024.10,
  author =	{J\"{a}ger, Sven and Roth, Sarah and Sch\"{o}bel, Anita},
  title =	{{Periodic Timetabling: Travel Time vs. Regenerative Energy}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{10:1--10:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.10},
  URN =		{urn:nbn:de:0030-drops-211983},
  doi =		{10.4230/OASIcs.ATMOS.2024.10},
  annote =	{Keywords: periodic timetabling, regenerative braking}
}
Document
Random-Order Online Independent Set of Intervals and Hyperrectangles

Authors: Mohit Garg, Debajyoti Kar, and Arindam Khan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of n (possibly overlapping) d-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For d = 1, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for d-dimensional hyperrectangles, polynomial time (log n)^{O(d)}-approximation algorithms are known [Chalermsook and Chuzhoy, 2009]. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining feasibility. Even for interval scheduling, an Ω(n) lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects, which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention as a way to study algorithms beyond worst-case competitive analysis (see the survey by Gupta and Singla [Anupam Gupta and Sahil Singla, 2020]). Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple (log n)^{O(d)}-competitive algorithm for d-dimensional hyperrectangles in this model, which runs in Õ_d(n) time. Our approach also yields (log n)^{O(d)}-competitive algorithms in the random-order model for more general objects such as d-dimensional fat objects and ellipsoids. Furthermore, all our competitiveness guarantees hold with high probability, and not just in expectation.
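
The d = 1 baseline the abstract refers to is the classical earliest-finish-time greedy for Interval Scheduling, sketched below for half-open intervals. It is optimal in the offline setting, which is exactly the guarantee the random-order online setting tries to approach.

def max_independent_intervals(intervals):
    """Classical optimal greedy for d = 1: scan by earliest right
    endpoint, keeping every interval compatible with the selection.
    Intervals are half-open [lo, hi)."""
    selected, last_end = [], float("-inf")
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo >= last_end:  # no overlap with the last selected interval
            selected.append((lo, hi))
            last_end = hi
    return selected

print(max_independent_intervals([(1, 4), (3, 5), (4, 7), (6, 9)]))
# [(1, 4), (4, 7)] -- a maximum independent set of size 2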

Cite as

Mohit Garg, Debajyoti Kar, and Arindam Khan. Random-Order Online Independent Set of Intervals and Hyperrectangles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{garg_et_al:LIPIcs.ESA.2024.58,
  author =	{Garg, Mohit and Kar, Debajyoti and Khan, Arindam},
  title =	{{Random-Order Online Independent Set of Intervals and Hyperrectangles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.58},
  URN =		{urn:nbn:de:0030-drops-211298},
  doi =		{10.4230/LIPIcs.ESA.2024.58},
  annote =	{Keywords: Online Algorithms, Random-Order Model, Maximum Independent Set of Rectangles, Hyperrectangles, Fat Objects, Interval Scheduling}
}
Document
Optimizing Throughput and Makespan of Queuing Systems by Information Design

Authors: Svenja M. Griesbach, Max Klimm, Philipp Warode, and Theresa Ziemke

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the optimal provision of information for two natural performance measures of queuing systems: throughput and makespan. A set of parallel links (queues) is equipped with deterministic capacities and stochastic offsets where the latter depend on a realized state, and the number of states is assumed to be constant. A continuum of flow particles (agents) arrives at the system at a constant rate. A system operator knows the realization of the state and may (partially) reveal this information via a public signaling scheme to the flow particles. Upon arrival, the flow particles observe the signal issued by the system operator, form an updated belief about the realized state, and decide which link to use. Inflow into a link exceeding the link’s capacity builds up in a queue that increases the cost (total travel time) on the link. Dynamic inflow rates are in a Bayesian dynamic equilibrium when the expected cost along all links with positive inflow is equal at every point in time and not larger than the expected cost of any unused link. For a given time horizon T, the throughput induced by a signaling scheme is the total volume of flow that leaves the links in the interval [0,T]. The public signaling scheme maximizing the throughput may involve irrational numbers. We provide an additive polynomial time approximation scheme (PTAS) that approximates the optimal throughput to within an arbitrary additive constant ε > 0. The algorithm solves a Lagrangian dual of the signaling problem with the Ellipsoid method, whose separation oracle is implemented by a cell decomposition technique. We also provide a multiplicative fully polynomial time approximation scheme (FPTAS) that does not rely on strong duality and thus allows computing the optimal signals. It uses a different cell decomposition technique together with a piecewise convex under-estimator of the optimal value function. Finally, we consider the makespan of a Bayesian dynamic equilibrium, which is defined as the last point in time when a given total value of flow leaves the system. Using a variational inequality argument, we show that full information revelation is a public signaling scheme that minimizes the makespan.
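
The primitive underlying any public signaling scheme is the Bayesian update the arriving particles perform. The toy sketch below shows that update for a two-state example; the states, signals, and probabilities are invented for illustration, and the paper's contribution lies in choosing the signal distribution optimally, which this sketch does not attempt.

def posterior(prior, likelihood, signal):
    """prior: {state: prob}; likelihood: {(signal, state): prob}.
    Returns the Bayes posterior over states given the public signal."""
    unnorm = {s: prior[s] * likelihood[(signal, s)] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

prior = {"low": 0.5, "high": 0.5}
like = {("go", "low"): 0.9, ("go", "high"): 0.3,
        ("wait", "low"): 0.1, ("wait", "high"): 0.7}
print(posterior(prior, like, "go"))  # {'low': 0.75, 'high': 0.25}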

Cite as

Svenja M. Griesbach, Max Klimm, Philipp Warode, and Theresa Ziemke. Optimizing Throughput and Makespan of Queuing Systems by Information Design. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 62:1-62:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{griesbach_et_al:LIPIcs.ESA.2024.62,
  author =	{Griesbach, Svenja M. and Klimm, Max and Warode, Philipp and Ziemke, Theresa},
  title =	{{Optimizing Throughput and Makespan of Queuing Systems by Information Design}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{62:1--62:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.62},
  URN =		{urn:nbn:de:0030-drops-211336},
  doi =		{10.4230/LIPIcs.ESA.2024.62},
  annote =	{Keywords: Information Design, Dynamic Flows, Public Signals, Convex Envelope}
}
Document
Many-To-Many Polygon Matching à La Jaccard

Authors: Alexander Naumann, Annika Bonerath, and Jan-Henrik Haunert

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Integration of spatial data is a major field of research. An important task of data integration is finding correspondences between entities. Here, we focus on combining building footprint data from cadastre and from volunteered geographic information, in particular OpenStreetMap. Previous research on this topic has led to exact 1:1 matching approaches and heuristic m:n matching approaches, most of which lack a mathematical problem definition. We introduce a model for many-to-many polygon matching based on the well-established Jaccard index. This is a natural extension of the existing 1:1 matching approaches. We show that the problem is NP-complete and that a naive approach via integer programming fails easily. By analyzing the structure of the problem in detail, we can reduce the number of variables significantly. This approach yields an optimal m:n matching even for large real-world instances within reasonable running time. In particular, for the set of all building footprints of the city of Bonn (119,300 / 97,284 polygons), it yielded an optimal solution in approximately 1 hour.
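
For intuition, the Jaccard index of two regions A and B is area(A ∩ B) / area(A ∪ B). The sketch below computes it for axis-aligned rectangles only; real building footprints are general polygons and would need a geometry library.

def jaccard(a, b):
    """Jaccard index of two axis-aligned rectangles (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(jaccard((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143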

Cite as

Alexander Naumann, Annika Bonerath, and Jan-Henrik Haunert. Many-To-Many Polygon Matching à La Jaccard. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 90:1-90:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{naumann_et_al:LIPIcs.ESA.2024.90,
  author =	{Naumann, Alexander and Bonerath, Annika and Haunert, Jan-Henrik},
  title =	{{Many-To-Many Polygon Matching \`{a} La Jaccard}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{90:1--90:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.90},
  URN =		{urn:nbn:de:0030-drops-211614},
  doi =		{10.4230/LIPIcs.ESA.2024.90},
  annote =	{Keywords: polygon matching, exact algorithm, Jaccard index}
}
Document
Cornucopia: Distributed Randomness at Scale

Authors: Miranda Christ, Kevin Choi, and Joseph Bonneau

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
We propose Cornucopia, a protocol framework for distributed randomness beacons combining accumulators and verifiable delay functions. Cornucopia generalizes the Unicorn protocol, using an accumulator to enable efficient verification by each participant that their contribution has been included. The output is unpredictable as long as at least one participant is honest, yielding a scalable distributed randomness beacon with strong security properties. Proving this approach secure requires developing a novel property of accumulators, insertion security, which we show is both necessary and sufficient for Cornucopia-style protocols. We show that not all accumulators are insertion-secure, then prove that common constructions (Merkle trees, RSA accumulators, and bilinear accumulators) are either naturally insertion-secure or can be made so with trivial modifications.
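
As a concrete instance of the accumulator component, the sketch below builds a Merkle root over participants' contributions (Merkle trees are one of the constructions the paper analyzes). This is a plain textbook construction with none of the insertion-security hardening the paper discusses; the contribution values are invented.

import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Textbook Merkle accumulator over the participants' contributions."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

contributions = [b"alice-entropy", b"bob-entropy", b"carol-entropy"]
print(merkle_root(contributions).hex())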

Cite as

Miranda Christ, Kevin Choi, and Joseph Bonneau. Cornucopia: Distributed Randomness at Scale. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 17:1-17:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{christ_et_al:LIPIcs.AFT.2024.17,
  author =	{Christ, Miranda and Choi, Kevin and Bonneau, Joseph},
  title =	{{Cornucopia: Distributed Randomness at Scale}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{17:1--17:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.17},
  URN =		{urn:nbn:de:0030-drops-209533},
  doi =		{10.4230/LIPIcs.AFT.2024.17},
  annote =	{Keywords: Randomness beacons, accumulators}
}
Document
A Shortfall in Investor Expectations of Leveraged Tokens

Authors: Reza Rahimian and Jeremy Clark

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Leveraged tokens (LVTs) are emerging crypto-assets primarily issued by centralized exchanges. The concept is borrowed from leveraged ETFs (LETFs) in traditional financial markets, which offer higher gains (and higher losses) relative to price movements in the underlying asset. Leverage is commonly used by short-term traders to amplify returns from daily market shifts. However, LVTs have been implemented differently from LETFs by exchanges in the crypto market, with variations across platforms. We examine the mechanics and constituent components of LVTs, demonstrating that the lack of a standard has resulted in deficiencies and unexpected technical and economic outcomes. To identify existing problems, we analyze more than 1,600 leveraged tokens from 10 issuers. Our analysis reveals that 99.9% of LVTs are centralized, with 80% lacking blockchain interaction, leading to transparency issues. Total supply information is difficult to access for 53% of them, and 41% appear inadequately backed at launch. Additionally, 97% of LVTs are vulnerable to front-running during well-known events, and they deviate from their stated leverage ratios more than LETFs, partly due to inconsistent re-leveraging processes and higher management fees. This work provides a framework for crypto investors, blockchain developers, and data analysts to gain a deep understanding of leveraged tokens and their impact on market dynamics, liquidity, and price movements. It also offers insights for crypto exchanges and auditors into the internal functionalities and financial performance of LVTs under varying market conditions.
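
The deviation from stated leverage ratios has a simple arithmetic core: periodic re-leveraging compounds path-dependently (volatility decay). In the invented example below, a 3x product loses 9% while the underlying loses only 1%, so the outcome is not 3 times the underlying's total return.

def compound(returns, leverage=1):
    """Value of 1 unit after applying leverage to each periodic return."""
    value = 1.0
    for r in returns:
        value *= 1 + leverage * r
    return value

path = [0.10, -0.10]                 # underlying: +10%, then -10%
print(compound(path, leverage=1))    # 0.99 -> -1% overall
print(compound(path, leverage=3))    # 1.3 * 0.7 = 0.91 -> -9% overall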

Cite as

Reza Rahimian and Jeremy Clark. A Shortfall in Investor Expectations of Leveraged Tokens. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 23:1-23:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{rahimian_et_al:LIPIcs.AFT.2024.23,
  author =	{Rahimian, Reza and Clark, Jeremy},
  title =	{{A Shortfall in Investor Expectations of Leveraged Tokens}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{23:1--23:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.23},
  URN =		{urn:nbn:de:0030-drops-209599},
  doi =		{10.4230/LIPIcs.AFT.2024.23},
  annote =	{Keywords: crypto-assets, ethereum, leverage, derivatives}
}
Document
APPROX
Scheduling Splittable Jobs on Configurable Machines

Authors: Matthew Casey, Rajmohan Rajaraman, David Stalfa, and Cheng Tan

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Motivated by modern architectures allowing for the partitioning of a GPU into hardware separated instances, we initiate the study of scheduling splittable jobs on configurable machines. We consider machines that can be configured into smaller instances, which we call blocks, in multiple ways, each of which is referred to as a configuration. We introduce the Configurable Machine Scheduling (cms) problem, where we are given n jobs and a set C of configurations. A schedule consists of a set of machines, each assigned some configuration in C with each block in the configuration assigned to process one job. The amount of a job’s demand that is satisfied by a block is given by an arbitrary function of the job and block. The objective is to construct a schedule using as few machines as possible. We provide a tight logarithmic factor approximation algorithm for this problem in the general setting, a factor (3 + ε) approximation algorithm for arbitrary ε > 0 when there are O(1) input configurations, and a polynomial time approximation scheme when both the number and size of configurations are O(1). Finally, we utilize a technique for finding conic integer combinations in fixed dimension to develop an optimal polynomial time algorithm in the case with O(1) jobs, O(1) blocks, and every configuration up to a given size.
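
To pin down the objects involved, here is a minimal, hedged data model of a cms instance together with a feasibility check for a candidate schedule. The names (is_feasible, satisfy) and the toy instance are invented; this encodes only the problem definition above, not any of the paper's algorithms.

def is_feasible(schedule, demand, satisfy):
    """schedule: list of machines, each a list of (block, job) pairs drawn
    from one configuration; feasible iff every job's demand is met."""
    got = {job: 0.0 for job in demand}
    for machine in schedule:
        for block, job in machine:
            got[job] += satisfy(job, block)
    return all(got[job] >= demand[job] for job in demand)

demand = {"j1": 4, "j2": 2}
satisfy = lambda job, block: block   # toy: a block of size b satisfies b
schedule = [[(3, "j1"), (1, "j2")], [(1, "j1"), (1, "j2")]]  # 2 machines
print(is_feasible(schedule, demand, satisfy))  # True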

Cite as

Matthew Casey, Rajmohan Rajaraman, David Stalfa, and Cheng Tan. Scheduling Splittable Jobs on Configurable Machines. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{casey_et_al:LIPIcs.APPROX/RANDOM.2024.22,
  author =	{Casey, Matthew and Rajaraman, Rajmohan and Stalfa, David and Tan, Cheng},
  title =	{{Scheduling Splittable Jobs on Configurable Machines}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.22},
  URN =		{urn:nbn:de:0030-drops-210157},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.22},
  annote =	{Keywords: Scheduling algorithms, Approximation algorithms, Configurable machines, Splittable jobs, Linear programming}
}
Document
APPROX
Learning-Augmented Maximum Independent Set

Authors: Vladimir Braverman, Prathamesh Dharangutte, Vihan Shah, and Chen Wang

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the Maximum Independent Set (MIS) problem on general graphs within the framework of learning-augmented algorithms. The MIS problem is known to be NP-hard and is also NP-hard to approximate to within a factor of n^(1-δ) for any δ > 0. We show that we can break this barrier in the presence of an oracle, obtained through predictions from a machine learning model, that answers vertex-membership queries for a fixed MIS with probability 1/2 + ε. In the first setting we consider, the oracle can be queried once per vertex to learn whether that vertex belongs to a fixed MIS, and it returns the correct answer with probability 1/2 + ε. Under this setting, we show an algorithm that obtains an Õ((√Δ)/ε)-approximation in O(m) time, where Δ is the maximum degree of the graph. In the second setting, we allow multiple queries to the oracle for a vertex, each of which is correct with probability 1/2 + ε. For this setting, we show an O(1)-approximation algorithm using O(n/ε²) total queries and Õ(m) runtime.
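
The multiple-query setting rests on a standard amplification fact: majority-voting O(1/ε²) independent answers, each correct with probability 1/2 + ε, yields the right answer with high probability (by a Chernoff bound). The simulation below illustrates that fact only; it is not the paper's approximation algorithm.

import random

def noisy_oracle(truth, eps):
    """Returns the true bit with probability 1/2 + eps."""
    return truth if random.random() < 0.5 + eps else (not truth)

def majority_query(truth, eps, k):
    votes = sum(noisy_oracle(truth, eps) for _ in range(k))
    return votes > k / 2

eps = 0.1
k = int(10 / eps ** 2)          # O(1/eps^2) queries per vertex
trials = [majority_query(True, eps, k) for _ in range(100)]
print(sum(trials))              # almost always 100: the vote is reliable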

Cite as

Vladimir Braverman, Prathamesh Dharangutte, Vihan Shah, and Chen Wang. Learning-Augmented Maximum Independent Set. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 24:1-24:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.APPROX/RANDOM.2024.24,
  author =	{Braverman, Vladimir and Dharangutte, Prathamesh and Shah, Vihan and Wang, Chen},
  title =	{{Learning-Augmented Maximum Independent Set}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{24:1--24:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.24},
  URN =		{urn:nbn:de:0030-drops-210179},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.24},
  annote =	{Keywords: Learning-augmented algorithms, maximum independent set, graph algorithms}
}
Document
RANDOM
Approximating the Number of Relevant Variables in a Parity Implies Proper Learning

Authors: Nader H. Bshouty and George Haddad

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Consider the model where we can access a parity function through random uniform labeled examples in the presence of random classification noise. In this paper, we show that approximating the number of relevant variables in the parity function is as hard as properly learning parities. More specifically, let γ:ℝ^+ → ℝ^+, where γ(x) ≥ x, be any strictly increasing function. In our first result, we show that from any polynomial-time algorithm that returns a γ-approximation, D (i.e., γ^{-1}(d(f)) ≤ D ≤ γ(d(f))), of the number of relevant variables d(f) for any parity f, we can, in polynomial time, construct a solution to the long-standing open problem of polynomial-time learning k(n)-sparse parities (parities with k(n) ≤ n relevant variables), where k(n) = ω_n(1). In our second result, we show that from any T(n)-time algorithm that, for any parity f, returns a γ-approximation of the number of relevant variables d(f) of f, we can, in polynomial time, construct a poly(Γ(n))T(Γ(n)²)-time algorithm that properly learns parities, where Γ(x) = γ(γ(x)). If T(Γ(n)²) = exp(o(n/log n)), this would resolve another long-standing open problem of properly learning parities in the presence of random classification noise in time exp(o(n/log n)).
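
The access model in question is easy to state in code: a parity over a hidden relevant set S, observed through uniform examples whose labels flip with some noise rate. The sketch below generates such noisy labeled examples (names and parameters are illustrative); the paper's reductions concern algorithms that consume exactly this kind of sample.

import random

def noisy_parity_example(S, n, eta):
    """One uniform example for the parity over S, with the label flipped
    with probability eta (random classification noise)."""
    x = [random.randint(0, 1) for _ in range(n)]
    label = sum(x[i] for i in S) % 2
    if random.random() < eta:
        label ^= 1
    return x, label

S = {0, 3, 4}                    # d(f) = 3 relevant variables
print(noisy_parity_example(S, n=8, eta=0.1))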

Cite as

Nader H. Bshouty and George Haddad. Approximating the Number of Relevant Variables in a Parity Implies Proper Learning. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 38:1-38:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bshouty_et_al:LIPIcs.APPROX/RANDOM.2024.38,
  author =	{Bshouty, Nader H. and Haddad, George},
  title =	{{Approximating the Number of Relevant Variables in a Parity Implies Proper Learning}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{38:1--38:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.38},
  URN =		{urn:nbn:de:0030-drops-210316},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.38},
  annote =	{Keywords: PAC Learning, Random Classification Noise, Uniform Distribution, Parity, Sparsity Approximation}
}
Document
Cross Ledger Transaction Consistency for Financial Auditing

Authors: Vlasis Koutsos, Xiangan Tian, Dimitrios Papadopoulos, and Dimitris Chatzopoulos

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Auditing throughout a fiscal year is integral to organizations with transactional activity. Organizations transact with each other and record the details of all their economic activities so that a regulatory committee can verify the lawfulness and legitimacy of their activity. However, it is computationally infeasible for the committee to perform all necessary checks for each organization. To overcome this, auditors assist in this process: organizations give access to all their internal data to their auditors, who then produce reports regarding the consistency of the organization’s data, alerting the committee to any inconsistencies. Despite this, numerous issues that result in fines annually revolve around such inconsistencies in bookkeeping across organizations. Notably, committees wishing to verify the correctness of auditor-provided reports need to redo all their calculations; a process whose cost is proportional to the number of organizations. In fact, it becomes prohibitive when considering real-world settings with thousands of organizations. In this work, we propose two protocols, CLOSC and CLOLC, whose goals are to enable auditors and a committee to verify the consistency of transactions across different ledgers. Both protocols ensure that for every transaction recorded in an organization’s ledger, there exists a dual one in the ledger of another organization, while safeguarding against other potential attacks. Importantly, we minimize the information leakage to auditors and other organizations and guarantee three crucial security and privacy properties that we propose: (i) transaction amount privacy, (ii) organization-auditor unlinkability, and (iii) transacting-organizations unlinkability. At the core of our protocols lies a two-tier ledger architecture alongside a suite of cryptographic tools. To demonstrate the practicality and scalability of our designs, we provide an extensive performance evaluation of both CLOSC and CLOLC. Our numbers are promising: all computation and verification times lie in the range of seconds, even for millions of transactions, while the on-chain storage costs for an auditing epoch are encouraging, i.e., in the range of GB for millions of transactions and thousands of organizations.
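
The consistency property at the heart of CLOSC and CLOLC is easy to state in the clear; the protocols' contribution is checking it without revealing amounts or linkages. Below is a toy plaintext check with an invented dual convention (same transaction id, negated amount), purely to fix ideas.

def duals_consistent(ledger_a, ledger_b):
    """Ledgers map tx_id -> signed amount; dual entries carry the same
    tx_id with the negated amount on the counterparty's side."""
    if ledger_a.keys() != ledger_b.keys():
        return False
    return all(ledger_a[tx] == -ledger_b[tx] for tx in ledger_a)

a = {"tx1": +100, "tx2": -40}    # organization A: received 100, paid 40
b = {"tx1": -100, "tx2": +40}    # organization B records the duals
print(duals_consistent(a, b))    # True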

Cite as

Vlasis Koutsos, Xiangan Tian, Dimitrios Papadopoulos, and Dimitris Chatzopoulos. Cross Ledger Transaction Consistency for Financial Auditing. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 4:1-4:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{koutsos_et_al:LIPIcs.AFT.2024.4,
  author =	{Koutsos, Vlasis and Tian, Xiangan and Papadopoulos, Dimitrios and Chatzopoulos, Dimitris},
  title =	{{Cross Ledger Transaction Consistency for Financial Auditing}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{4:1--4:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.4},
  URN =		{urn:nbn:de:0030-drops-209409},
  doi =		{10.4230/LIPIcs.AFT.2024.4},
  annote =	{Keywords: Financial auditing, Two-tier ledger architecture, Smart contracts, Transaction privacy, Financial entity unlinkability}
}
Document
Proof of Diligence: Cryptoeconomic Security for Rollups

Authors: Peiyao Sheng, Ranvir Rana, Senthil Bala, Himanshu Tyagi, and Pramod Viswanath

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Layer 1 (L1) blockchains such as Ethereum are secured under an "honest supermajority of stake" assumption for a large pool of validators who verify each and every transaction on them. This high security comes at a scalability cost which not only affects the throughput of the blockchain but also results in high gas fees for executing transactions on chain. The most successful solution for this problem is provided by optimistic rollups, Layer 2 (L2) blockchains that execute transactions outside L1 but post the transaction data on L1. The security of such L2 chains is argued, informally, under the assumption that a set of nodes will check the transaction data posted on L1 and raise an alarm (a fraud proof) if faulty transactions are detected. However, all current deployments lack a proper incentive mechanism for ensuring that these nodes will do their job "diligently", and simply rely on a cursory incentive-alignment argument for security. We solve this problem by introducing an incentivized watchtower network designed to serve as the first line of defense for rollups. Our main contribution is a "Proof of Diligence" protocol that requires watchtowers to continuously provide proof that they have verified L2 assertions and rewards them for doing so. The Proof of Diligence protocol includes a carefully designed incentive mechanism that is provably secure when watchtowers are rational actors, under a mild rational-independence assumption. Our proposed system is now live on Ethereum testnet. We deployed a watchtower network and implemented Proof of Diligence for multiple optimistic rollups. We extract execution as well as inclusion proofs for transactions as a part of the bounty. Each watchtower has minimal additional computational overhead beyond access to standard L1 and L2 RPC nodes. Our watchtower network comprises 10 different (rationally independent) EigenLayer operators, secured using restaked Ethereum and spread across three different continents, watching two different optimistic rollups for Ethereum and providing them with a decentralized and trust-free first line of defense. The watchtower network can be configured to watch the batches committed by the sequencer on L1, providing approximately 3-minute (cryptoeconomically secure) finality, since the additional overhead for watching is very low. This is much lower than the finality delay in the current setup, where it takes about 45 minutes for state assertions on L1, and hence watching will not delay the finality process on L1.

Cite as

Peiyao Sheng, Ranvir Rana, Senthil Bala, Himanshu Tyagi, and Pramod Viswanath. Proof of Diligence: Cryptoeconomic Security for Rollups. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 5:1-5:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sheng_et_al:LIPIcs.AFT.2024.5,
  author =	{Sheng, Peiyao and Rana, Ranvir and Bala, Senthil and Tyagi, Himanshu and Viswanath, Pramod},
  title =	{{Proof of Diligence: Cryptoeconomic Security for Rollups}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{5:1--5:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.5},
  URN =		{urn:nbn:de:0030-drops-209417},
  doi =		{10.4230/LIPIcs.AFT.2024.5},
  annote =	{Keywords: blockchain, rollup, game theory, security}
}
Document
Qafny: A Quantum-Program Verifier

Authors: Liyi Li, Mingwei Zhu, Rance Cleaveland, Alexander Nicolellis, Yi Lee, Le Chang, and Xiaodi Wu

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Because of the probabilistic/nondeterministic behavior of quantum programs, it is highly advisable to verify them formally to ensure that they correctly implement their specifications. Formal verification, however, also traditionally requires significant effort. To address this challenge, we present Qafny, an automated proof system based on the program verifier Dafny and designed for verifying quantum programs. At its core, Qafny uses a type-guided quantum proof system that translates quantum operations to classical array operations modeled within a classical separation logic framework. We prove the soundness and completeness of our proof system and implement a prototype compiler that transforms Qafny programs and specifications into Dafny for automated verification purposes. We then illustrate the utility of Qafny’s automated capabilities in efficiently verifying important quantum algorithms, including quantum-walk algorithms, Grover’s algorithm, and Shor’s algorithm.

Cite as

Liyi Li, Mingwei Zhu, Rance Cleaveland, Alexander Nicolellis, Yi Lee, Le Chang, and Xiaodi Wu. Qafny: A Quantum-Program Verifier. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 24:1-24:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ECOOP.2024.24,
  author =	{Li, Liyi and Zhu, Mingwei and Cleaveland, Rance and Nicolellis, Alexander and Lee, Yi and Chang, Le and Wu, Xiaodi},
  title =	{{Qafny: A Quantum-Program Verifier}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{24:1--24:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.24},
  URN =		{urn:nbn:de:0030-drops-208735},
  doi =		{10.4230/LIPIcs.ECOOP.2024.24},
  annote =	{Keywords: Quantum Computing, Automated Verification, Separation Logic}
}
Document
Java Bytecode Normalization for Code Similarity Analysis

Authors: Stefan Schott, Serena Elisa Ponta, Wolfram Fischer, Jonas Klauke, and Eric Bodden

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Analyzing the similarity of two code fragments has many applications, including code clone, vulnerability and plagiarism detection. Most existing approaches for similarity analysis work on source code. However, in scenarios like plagiarism detection, copyright violation detection, or Software Bill of Materials creation, source code is often not available, and thus similarity analysis has to be performed on binary formats. Java bytecode is a binary format executable by the Java Virtual Machine and obtained from the compilation of Java source code. Performing similarity detection on bytecode is challenging because different compilers can compile the same source code to syntactically vastly different bytecode. In this work we assess to what extent one can nonetheless enable similarity detection by bytecode normalization, a procedure to transform Java bytecode into a representation that is identical for the same original source code, irrespective of the Java compiler and Java version used during compilation. Our manual study revealed 16 classes of compilation differences that various compilation environments may induce. Based on these findings, we implemented bytecode normalization in a tool called jNorm. It uses Jimple as its intermediate representation, applies common code optimizations and transforms all classes of compilation difference to a normalized form, thus achieving a representation of the bytecode that is identical despite different compilation environments. Our evaluation, performed on more than 300 popular Java projects, shows that merely incrementing the compiler version may cause differences in 46% of all resulting bytecode files. By applying bytecode normalization, one can remove more than 99% of these differences, thus acting as a crucial enabler for subsequent applications of bytecode similarity analysis.
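
The end use of normalization is a simple equality check: normalize both instruction streams, then compare digests. The sketch below mimics that pipeline with a single invented rewrite rule standing in for the 16 real classes of compilation differences; the opcodes and the rule are toy stand-ins, not jNorm's actual transformations.

import hashlib

CANON = {"iconst_0": "ldc 0", "ldc #0": "ldc 0"}   # invented toy rule

def normalize(instructions):
    """Drop no-ops and map equivalent instructions to one canonical form."""
    return [CANON.get(op, op) for op in instructions if op != "nop"]

def digest(instructions):
    return hashlib.sha256("\n".join(normalize(instructions)).encode()).hexdigest()

compiled_a = ["iconst_0", "istore_1", "nop"]   # one compiler's output
compiled_b = ["ldc #0", "istore_1"]            # another compiler's output
print(digest(compiled_a) == digest(compiled_b))  # True after normalization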

Cite as

Stefan Schott, Serena Elisa Ponta, Wolfram Fischer, Jonas Klauke, and Eric Bodden. Java Bytecode Normalization for Code Similarity Analysis. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 37:1-37:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schott_et_al:LIPIcs.ECOOP.2024.37,
  author =	{Schott, Stefan and Ponta, Serena Elisa and Fischer, Wolfram and Klauke, Jonas and Bodden, Eric},
  title =	{{Java Bytecode Normalization for Code Similarity Analysis}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{37:1--37:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.37},
  URN =		{urn:nbn:de:0030-drops-208865},
  doi =		{10.4230/LIPIcs.ECOOP.2024.37},
  annote =	{Keywords: Bytecode, Java Compiler, Code Similarity Analysis}
}
  • Refine by Author
  • 4 Li, Xin
  • 4 Zheng, Yu
  • 3 Cheng, Kuan
  • 3 Hadzilacos, Vassos
  • 3 Toueg, Sam

  • Refine by Classification
  • 3 Theory of computation → Error-correcting codes
  • 3 Theory of computation → Lower bounds and information complexity
  • 3 Theory of computation → Scheduling algorithms
  • 2 Applied computing → Transportation
  • 2 Computer systems organization → Real-time system architecture

  • Refine by Keyword
  • 2 Asynchronous System
  • 2 Edit Distance
  • 2 Genome assembly
  • 2 Longest Common Subsequence
  • 2 Real-Time Systems

  • Refine by Type
  • 42 document

  • Refine by Publication Year
  • 32 2024
  • 3 2017
  • 3 2021
  • 3 2023
  • 1 2020
