14 Search Results for "Yang, Nan"


Document
Open the Chests: An Environment for Activity Recognition and Sequential Decision Problems Using Temporal Logic

Authors: Ivelina Stoyanova, Nicolas Museux, Sao Mai Nguyen, and David Filliat

Published in: LIPIcs, Volume 318, 31st International Symposium on Temporal Representation and Reasoning (TIME 2024)


Abstract
This article presents Open the Chests, a novel benchmark environment designed for simulating and testing activity recognition and reactive decision-making algorithms. By leveraging temporal logic, Open the Chests offers a dynamic, event-driven simulation platform that illustrates the complexities of real-world systems. The environment contains multiple chests, each representing an activity pattern that an interacting agent must identify and respond to by pressing a corresponding button. The agent must analyze sequences of asynchronous events generated by the environment to recognize these patterns and make informed decisions. To theoretically ground the environment, we define the Activity-Based Markov Decision Process (AB-MDP), which models the context-dependent interaction with activities. Our goal is to propose a robust tool for the development, testing, and benchmarking of algorithms that is illustrative of realistic scenarios and allows for the isolation of specific complexities in event-driven environments.
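
For a concrete picture of the event-driven interaction described above, the following is a minimal sketch of an agent loop against a Gymnasium-style interface; the package name, environment id, and observation fields are illustrative assumptions, not the paper's actual API.

    # Minimal sketch of an event-driven agent loop (hypothetical API names).
    import gymnasium as gym
    import open_the_chests  # hypothetical package registering the environment

    env = gym.make("OpenTheChests-v0")   # hypothetical environment id
    obs, info = env.reset(seed=0)

    done = False
    while not done:
        # obs is assumed to expose the latest asynchronous event and the chest states.
        action = env.action_space.sample()   # placeholder policy: press a random button
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
    env.close()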

Cite as

Ivelina Stoyanova, Nicolas Museux, Sao Mai Nguyen, and David Filliat. Open the Chests: An Environment for Activity Recognition and Sequential Decision Problems Using Temporal Logic. In 31st International Symposium on Temporal Representation and Reasoning (TIME 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 318, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{stoyanova_et_al:LIPIcs.TIME.2024.5,
  author =	{Stoyanova, Ivelina and Museux, Nicolas and Nguyen, Sao Mai and Filliat, David},
  title =	{{Open the Chests: An Environment for Activity Recognition and Sequential Decision Problems Using Temporal Logic}},
  booktitle =	{31st International Symposium on Temporal Representation and Reasoning (TIME 2024)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-349-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{318},
  editor =	{Sala, Pietro and Sioutis, Michael and Wang, Fusheng},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2024.5},
  URN =		{urn:nbn:de:0030-drops-212128},
  doi =		{10.4230/LIPIcs.TIME.2024.5},
  annote =	{Keywords: Event-Based Decision Making, Activity Recognition, Temporal Logic, Reinforcement Learning, Dynamic Systems, Complex Event Processing, Benchmark Environment, Real-Time Simulation}
}
Document
A Bayesian Rolling Horizon Approach for Rolling Stock Rotation Planning with Predictive Maintenance

Authors: Felix Prause and Ralf Borndörfer

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
We consider the rolling stock rotation planning problem with predictive maintenance (RSRP-PdM), where a timetable given by a set of trips must be operated by a fleet of vehicles. Here, the health states of the vehicles are assumed to be random variables, and their maintenance schedule should be planned based on their predicted failure probabilities. Utilizing the Bayesian update step of the Kalman filter, we develop a rolling horizon approach for RSRP-PdM, in which the predicted health state distributions are updated as new data become available. This approach reduces the uncertainty of the health states and thus improves the decision-making basis for maintenance planning. To solve the instances, we employ a local neighborhood search, which is a modification of a heuristic for RSRP-PdM, and demonstrate its effectiveness. Using this solution algorithm, the presented approach is compared with the results of common maintenance strategies on test instances derived from real-world timetables. The obtained results show the benefits of the rolling horizon approach.
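
As a minimal sketch of the kind of Bayesian (Kalman-style) update step the abstract refers to, consider a scalar health indicator with Gaussian prior and Gaussian measurement noise; the variable names and noise levels are illustrative assumptions, not the paper's model.

    # Minimal Kalman-style Bayesian update for a scalar vehicle health indicator.
    # All quantities are illustrative assumptions, not the paper's actual model.
    def predict(mean, var, drift, process_var):
        # Prediction step: health degrades by an expected drift plus process noise.
        return mean + drift, var + process_var

    def update(mean, var, measurement, measurement_var):
        # Bayesian (Kalman) update step: fuse the prediction with new sensor data.
        gain = var / (var + measurement_var)
        new_mean = mean + gain * (measurement - mean)
        new_var = (1.0 - gain) * var
        return new_mean, new_var

    mean, var = 1.0, 0.04   # prior health state (1.0 = fully healthy)
    mean, var = predict(mean, var, drift=-0.02, process_var=0.001)
    mean, var = update(mean, var, measurement=0.95, measurement_var=0.01)
    print(mean, var)        # posterior used to re-plan maintenance in the next horizon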

Cite as

Felix Prause and Ralf Borndörfer. A Bayesian Rolling Horizon Approach for Rolling Stock Rotation Planning with Predictive Maintenance. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{prause_et_al:OASIcs.ATMOS.2024.13,
  author =	{Prause, Felix and Bornd\"{o}rfer, Ralf},
  title =	{{A Bayesian Rolling Horizon Approach for Rolling Stock Rotation Planning with Predictive Maintenance}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{13:1--13:19},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.13},
  URN =		{urn:nbn:de:0030-drops-212013},
  doi =		{10.4230/OASIcs.ATMOS.2024.13},
  annote =	{Keywords: Rolling stock rotation planning, Predictive maintenance, Rolling horizon approach, Bayesian inference, Local neighborhood search}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets respectively.

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
SoK: Zero-Knowledge Range Proofs

Authors: Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Zero-knowledge range proofs (ZKRPs) allow a prover to convince a verifier that a secret value lies in a given interval. ZKRPs have numerous applications: from anonymous credentials and auctions, to confidential transactions in cryptocurrencies. At the same time, a plethora of ZKRP constructions exist in the literature, each with its own trade-offs. In this work, we systematize the knowledge around ZKRPs. We create a classification of existing constructions based on the underlying building techniques, and we summarize their properties. We provide comparisons between schemes both in terms of properties as well as efficiency levels, and construct a guideline to assist in the selection of an appropriate ZKRP for different application requirements. Finally, we discuss a number of interesting open research problems.

Cite as

Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang. SoK: Zero-Knowledge Range Proofs. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{christ_et_al:LIPIcs.AFT.2024.14,
  author =	{Christ, Miranda and Baldimtsi, Foteini and Chalkias, Konstantinos Kryptos and Maram, Deepak and Roy, Arnab and Wang, Joy},
  title =	{{SoK: Zero-Knowledge Range Proofs}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.14},
  URN =		{urn:nbn:de:0030-drops-209504},
  doi =		{10.4230/LIPIcs.AFT.2024.14},
  annote =	{Keywords: Range proofs, zero knowledge}
}
Document
Learning Gradual Typing Performance

Authors: Mohammad Wahiduzzaman Khan, Sheng Chen, and Yi He

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Gradual typing has emerged as a promising typing discipline for reconciling static and dynamic typing, which have respective strengths and shortcomings. Thanks to its promises, gradual typing has gained tremendous momentum in both industry and academia. However, a main challenge in gradual typing is that the performance of its programs can often be unpredictable, and adding or removing the type annotation of a single parameter may lead to wild performance swings. Many approaches have been proposed to optimize gradual typing performance, but little work has been done to aid in understanding the performance landscape of gradual typing and in navigating the migration process (which adds type annotations to make programs more static) to avert performance slowdowns. Motivated by this situation, this work develops a machine-learning-based approach to predict the performance of each possible way of adding type annotations to a program. On top of that, many forms of support for program migration could be developed, such as finding the most performant neighbor of any given configuration. Our approach gauges the runtime overheads of dynamic type checks inserted by gradual typing and uses that information to train a machine learning model, which is used to predict the running time of gradual programs. We have evaluated our approach on 12 Python benchmarks for both guarded and transient semantics. For guarded semantics, our evaluation results indicate that with only 40 training instances generated from each benchmark, the predicted times for all other instances differ on average by 4% from the measured times. For transient semantics, the time difference ratio is higher, but the time difference is often within 0.1 seconds.
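
The general idea of predicting a configuration's running time from cheap-to-collect features can be sketched as follows; the feature encoding (per-site dynamic-check counts) and the choice of a linear model are illustrative assumptions, not the paper's actual pipeline.

    # Minimal sketch: predict running time from a typing configuration's features.
    # Feature encoding and model choice are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row: one configuration, e.g. dynamic-check counts gathered by profiling.
    X_train = np.array([
        [120, 0, 45],
        [300, 15, 45],
        [10, 15, 200],
        [0, 0, 0],
    ], dtype=float)
    y_train = np.array([1.8, 3.1, 2.4, 0.9])   # measured running times in seconds

    model = LinearRegression().fit(X_train, y_train)

    # Predict the running time of an unmeasured configuration.
    X_new = np.array([[60, 15, 100]], dtype=float)
    print(model.predict(X_new))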

Cite as

Mohammad Wahiduzzaman Khan, Sheng Chen, and Yi He. Learning Gradual Typing Performance. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 21:1-21:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{khan_et_al:LIPIcs.ECOOP.2024.21,
  author =	{Khan, Mohammad Wahiduzzaman and Chen, Sheng and He, Yi},
  title =	{{Learning Gradual Typing Performance}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{21:1--21:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.21},
  URN =		{urn:nbn:de:0030-drops-208706},
  doi =		{10.4230/LIPIcs.ECOOP.2024.21},
  annote =	{Keywords: Gradual typing performance, type migration, performance prediction, machine learning}
}
Document
Combining Constraint Programming Reasoning with Large Language Model Predictions

Authors: Florian Régin, Elisabetta De Maria, and Alexandre Bonlarron

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP’s struggle with implementing "meaning" and ML’s difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) using LLM-generated domains. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
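
To illustrate the flavor of the combination described above, here is a minimal sketch of a constraint-guided text search in which variable domains are produced lazily by a language model; the function llm_next_words is a hypothetical stub, and the search structure is a simplified illustration rather than the GenCP algorithm itself.

    # Minimal sketch of constrained text generation with LLM-generated domains.
    # llm_next_words is a hypothetical stub standing in for a real language model call.
    def llm_next_words(prefix, k=3):
        # Hypothetical: return k candidate next words for the sentence prefix.
        return ["the", "a", "one"][:k]

    def satisfies(words, constraints):
        return all(c(words) for c in constraints)

    def search(prefix, length, constraints):
        # Depth-first search: the domain of each word variable is generated on the fly.
        if len(prefix) == length:
            return prefix if satisfies(prefix, constraints) else None
        for word in llm_next_words(prefix):
            result = search(prefix + [word], length, constraints)
            if result is not None:
                return result
        return None

    # Example structural constraint: every word must have at most three letters.
    print(search([], 4, [lambda ws: all(len(w) <= 3 for w in ws)]))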

Cite as

Florian Régin, Elisabetta De Maria, and Alexandre Bonlarron. Combining Constraint Programming Reasoning with Large Language Model Predictions. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 25:1-25:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{regin_et_al:LIPIcs.CP.2024.25,
  author =	{R\'{e}gin, Florian and De Maria, Elisabetta and Bonlarron, Alexandre},
  title =	{{Combining Constraint Programming Reasoning with Large Language Model Predictions}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{25:1--25:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.25},
  URN =		{urn:nbn:de:0030-drops-207109},
  doi =		{10.4230/LIPIcs.CP.2024.25},
  annote =	{Keywords: Solver and Tools, ML-augmented CP, Constrained Text Generation, ML alongside CO}
}
Document
On the Complexity of the Median and Closest Permutation Problems

Authors: Luís Cunha, Ignasi Sau, and Uéverton Souza

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
Genome rearrangements are events where large blocks of DNA exchange places during evolution. The analysis of these events is a promising tool for understanding evolutionary genomics, providing data for phylogenetic reconstruction based on genome rearrangement measures. Many pairwise rearrangement distances have been proposed, based on finding the minimum number of rearrangement events needed to transform one genome into the other using some predefined operation. When more than two genomes are considered, we have the more challenging problem of rearrangement-based phylogeny reconstruction. Given a set of genomes and a distance notion, there are at least two natural ways to define the "target" genome: on the one hand, finding a genome that minimizes the sum of the distances from it to all the others, called the median genome; on the other hand, finding a genome that minimizes the maximum distance to any other, called the closest genome. Considering genomes as permutations of distinct integers, some distance metrics have been extensively studied. We investigate the median and closest problems on permutations over the following metrics: breakpoint distance, swap distance, block-interchange distance, short-block-move distance, and transposition distance. In biological applications some values are usually very small, such as the solution value d or the number k of input permutations. For each of these metrics and parameters d or k, we analyze the closest and the median problems from the viewpoint of parameterized complexity. We obtain the following results: NP-hardness for finding the median/closest permutation with respect to some of the distance metrics, even for only k = 3 permutations; polynomial kernels for finding the median permutation under all studied metrics, considering the target distance d as a parameter; an NP-hardness result for finding the closest permutation by short block moves; and FPT algorithms together with the infeasibility of polynomial kernels for finding the closest permutation under some metrics when parameterized by the target distance d.
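
For quick reference, the two target notions contrasted above can be written as follows; the notation (S_n for the permutations of length n, π_1, …, π_k for the inputs, and d for the chosen metric) is ours and only restates the definitions in the abstract.

    % Median and closest permutation objectives for a distance d and inputs pi_1, ..., pi_k.
    \[
      \text{median:}\quad \pi^{*} \in \arg\min_{\pi \in S_n} \sum_{i=1}^{k} d(\pi, \pi_i)
      \qquad
      \text{closest:}\quad \pi^{*} \in \arg\min_{\pi \in S_n} \max_{1 \le i \le k} d(\pi, \pi_i)
    \]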

Cite as

Luís Cunha, Ignasi Sau, and Uéverton Souza. On the Complexity of the Median and Closest Permutation Problems. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cunha_et_al:LIPIcs.WABI.2024.2,
  author =	{Cunha, Lu{\'\i}s and Sau, Ignasi and Souza, U\'{e}verton},
  title =	{{On the Complexity of the Median and Closest Permutation Problems}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.2},
  URN =		{urn:nbn:de:0030-drops-206468},
  doi =		{10.4230/LIPIcs.WABI.2024.2},
  annote =	{Keywords: Median problem, Closest problem, Genome rearrangements, Parameterized complexity}
}
Document
The Computational Advantage of MIP^∗ Vanishes in the Presence of Noise

Authors: Yangjing Dong, Honghao Fu, Anand Natarajan, Minglong Qin, Haochen Xu, and Penghui Yao

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
The class MIP^* of quantum multiprover interactive proof systems with entanglement is much more powerful than its classical counterpart MIP [Babai et al., 1991; Zhengfeng Ji et al., 2020; Zhengfeng Ji et al., 2020]: while MIP = NEXP, the quantum class MIP^* is equal to RE, a class including the halting problem. This is because the provers in MIP^* can share unbounded quantum entanglement. However, recent works [Qin and Yao, 2021; Qin and Yao, 2023] have shown that this advantage is significantly reduced if the provers' shared state contains noise. This paper attempts to exactly characterize the effect of noise on the computational power of quantum multiprover interactive proof systems. We investigate the quantum two-prover one-round interactive system MIP^*[poly,O(1)], where the verifier sends polynomially many bits to the provers and the provers send back constantly many bits. We show noise completely destroys the computational advantage given by shared entanglement in this model. Specifically, we show that if the provers are allowed to share arbitrarily many EPR states, where each EPR state is affected by an arbitrarily small constant amount of noise, the resulting complexity class is equivalent to NEXP = MIP. This improves significantly on the previous best-known bound of NEEEXP (nondeterministic triply exponential time) [Qin and Yao, 2021]. We also show that this collapse in power is due to the noise, rather than the O(1) answer size, by showing that allowing for noiseless EPR states gives the class the full power of RE = MIP^*[poly, poly]. Along the way, we develop two technical tools of independent interest. First, we give a new, deterministic tester for the positivity of an exponentially large matrix, provided it has a low-degree Fourier decomposition in terms of Pauli matrices. Secondly, we develop a new invariance principle for smooth matrix functions having bounded third-order Fréchet derivatives or which are Lipschitz continuous.

Cite as

Yangjing Dong, Honghao Fu, Anand Natarajan, Minglong Qin, Haochen Xu, and Penghui Yao. The Computational Advantage of MIP^∗ Vanishes in the Presence of Noise. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 30:1-30:71, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dong_et_al:LIPIcs.CCC.2024.30,
  author =	{Dong, Yangjing and Fu, Honghao and Natarajan, Anand and Qin, Minglong and Xu, Haochen and Yao, Penghui},
  title =	{{The Computational Advantage of MIP^∗ Vanishes in the Presence of Noise}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{30:1--30:71},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.30},
  URN =		{urn:nbn:de:0030-drops-204263},
  doi =		{10.4230/LIPIcs.CCC.2024.30},
  annote =	{Keywords: Interactive proofs, Quantum complexity theory, Quantum entanglement, Fourier analysis, Matrix analysis, Invariance principle, Derandomization, PCP, Locally testable code, Positivity testing}
}
Document
Engineering A* Search for the Flip Distance of Plane Triangulations

Authors: Philip Mayer and Petra Mutzel

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
The flip distance for two triangulations of a point set is defined as the smallest number of edge flips needed to transform one triangulation into another, where an edge flip is the act of replacing an edge of a triangulation by a different edge such that the result remains a triangulation. We adapt and engineer a sophisticated A* search algorithm acting on the so-called flip graph. In particular, we prove that previously proposed lower bounds for the flip distance form consistent heuristics for A* and show that they can be computed efficiently using dynamic algorithms. As an alternative approach, we present an integer linear program (ILP) for the flip distance problem. We experimentally evaluate our approaches on a new real-world benchmark data set based on an application in geodesy, namely sea surface reconstruction. Our evaluation reveals that A* search consistently outperforms our ILP formulation as well as a naive baseline, which is bidirectional breadth-first search. In particular, the runtime of our approach improves upon the baseline by more than two orders of magnitude. Furthermore, our A* search successfully solves most of the considered sea surface instances with up to 41 points. This is a substantial improvement compared to the baseline, which struggles with subsets of the real-world data of size 25. Lastly, to allow the consideration of global sea level data, we developed a decomposition-based heuristic for the flip distance. In our experiments it yields optimal flip distance values for most of the considered sea level data and it can be applied to large data sets due to its fast runtime.
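
The following is a generic A* skeleton of the kind adapted in such a search; the flip-graph-specific neighbor generation and the lower-bound heuristics proved consistent in the paper are not shown and would have to be supplied.

    # Generic A* skeleton; neighbors() and h() are problem-specific and assumed given.
    import heapq

    def astar(start, is_goal, neighbors, h):
        # h must be a consistent heuristic for the returned path length to be optimal.
        open_heap = [(h(start), 0, start)]
        best_g = {start: 0}
        while open_heap:
            f, g, node = heapq.heappop(open_heap)
            if is_goal(node):
                return g                      # number of flips on an optimal path
            if g > best_g.get(node, float("inf")):
                continue                      # stale queue entry
            for nxt in neighbors(node):
                ng = g + 1                    # every edge flip has unit cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
        return None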

Cite as

Philip Mayer and Petra Mutzel. Engineering A* Search for the Flip Distance of Plane Triangulations. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mayer_et_al:LIPIcs.SEA.2024.23,
  author =	{Mayer, Philip and Mutzel, Petra},
  title =	{{Engineering A* Search for the Flip Distance of Plane Triangulations}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.23},
  URN =		{urn:nbn:de:0030-drops-203887},
  doi =		{10.4230/LIPIcs.SEA.2024.23},
  annote =	{Keywords: Computational Geometry, Triangulations, Flip Distance, A-star Search, Integer Linear Programming}
}
Document
Crêpe: Clock-Reconfiguration-Aware Preemption Control in Real-Time Systems with Devices

Authors: Eva Dengler and Peter Wägemann

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
The domain of energy-constrained real-time systems operated on modern embedded system-on-chip (SoC) platforms brings numerous novel challenges for optimal resource minimization. These modern hardware platforms offer a heterogeneous variety of features to configure the tradeoff between temporal performance and energy efficiency, which goes beyond the state of the art of existing dynamic-voltage-frequency-scaling (DVFS) scheduling schemes. The control center for configuring this tradeoff on these platforms is the complex clock subsystem, which is intertwined with the requirements of the SoC’s components (e.g., transceiver/memory/sensor devices). That is, several devices have precedence constraints with respect to specific clock sources and their settings. Dynamically adapting the various clock sources to select resource-optimal configurations becomes especially challenging in the presence of asynchronous preemptions, which are inherent to systems that use devices. In this paper, we present Crêpe, an approach to clock-reconfiguration-aware preemption control: Crêpe has an understanding of the target platform’s clock subsystem, its sleep states, and the penalties for reconfiguring clock sources to adapt clock frequencies. Crêpe’s hardware model is combined with an awareness of the application’s device requirements for each executed task, as well as possible interrupts that cause preemptions during runtime. Using these software/hardware constraints, Crêpe employs, in its offline phase, a mathematical formalization to select energy-minimal configurations while meeting given deadlines. This optimizing formalization, processed by standard mathematical solver tools, accounts for potentially occurring interrupts and the respective clock reconfigurations, which are then forwarded as alternative schedules to Crêpe’s runtime system. During runtime, the dispatcher assesses these offline-determined alternative schedules and reconfigures the clock sources for energy minimization. We developed an implementation based on a widely used SoC platform (i.e., the ESP32-C3) and an automated testbed for comprehensive energy-consumption evaluations to validate Crêpe’s claim of selecting resource-optimal settings under worst-case considerations.

Cite as

Eva Dengler and Peter Wägemann. Crêpe: Clock-Reconfiguration-Aware Preemption Control in Real-Time Systems with Devices. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 10:1-10:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dengler_et_al:LIPIcs.ECRTS.2024.10,
  author =	{Dengler, Eva and W\"{a}gemann, Peter},
  title =	{{Cr\^{e}pe: Clock-Reconfiguration-Aware Preemption Control in Real-Time Systems with Devices}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{10:1--10:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.10},
  URN =		{urn:nbn:de:0030-drops-203135},
  doi =		{10.4230/LIPIcs.ECRTS.2024.10},
  annote =	{Keywords: energy-constrained real-time systems, time/energy tradeoff, system-on-chip, energy-aware real-time scheduling, resource minimization, preemption control, worst-case energy consumption (WCEC), worst-case execution time (WCET), static whole-system analysis}
}
Document
Response Time Analysis for Fixed-Priority Preemptive Uniform Multiprocessor Systems

Authors: Binqi Sun, Tomasz Kloda, and Marco Caccamo

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
We present a response time analysis for global fixed-priority preemptive scheduling of constrained-deadline tasks upon a uniform multiprocessor where each processor can be characterized by a different speed. A fixed-priority scheduler assigns the jobs with the highest priorities to the fastest processors. Since determining whether all tasks can meet their deadlines is generally intractable even with identical processors, we propose two sufficient schedulability tests that calculate upper bounds on the task’s worst-case response time within polynomial and pseudo-polynomial time. The proposed tests leverage the linear programming model to upper bound the interference of the higher-priority tasks. Furthermore, we identify specific conditions and platforms upon which the problem can be solved more efficiently within linear time. These formulations are used to iteratively evaluate and refine possible solutions until a safe upper bound on the task’s worst-case response time is found. Additionally, we demonstrate that, with specific minor modifications, the proposed tests are compatible with Audsley’s optimal priority assignment. Experimental evaluations performed on synthetic task sets show that the proposed approach outperforms the state-of-the-art methods.
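
For background only: analyses of this kind generalize the classical uniprocessor fixed-priority response-time recurrence shown below; the paper's uniform-multiprocessor tests instead bound higher-priority interference via linear programming, so this formula is merely the familiar single-processor baseline, not the proposed test.

    % Classical uniprocessor recurrence for task tau_i with WCET C_i, periods T_j,
    % and higher-priority set hp(i); iterate until R_i^{(k+1)} = R_i^{(k)} or the deadline is exceeded.
    \[
      R_i^{(k+1)} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j,
      \qquad R_i^{(0)} = C_i
    \]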

Cite as

Binqi Sun, Tomasz Kloda, and Marco Caccamo. Response Time Analysis for Fixed-Priority Preemptive Uniform Multiprocessor Systems. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 17:1-17:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sun_et_al:LIPIcs.ECRTS.2024.17,
  author =	{Sun, Binqi and Kloda, Tomasz and Caccamo, Marco},
  title =	{{Response Time Analysis for Fixed-Priority Preemptive Uniform Multiprocessor Systems}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{17:1--17:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.17},
  URN =		{urn:nbn:de:0030-drops-203201},
  doi =		{10.4230/LIPIcs.ECRTS.2024.17},
  annote =	{Keywords: Real-time scheduling, Uniform multiprocessor, Response time analysis}
}
Document
On Min-Max Graph Balancing with Strict Negative Correlation Constraints

Authors: Ting-Yu Kuo, Yu-Han Chen, Andrea Frosini, Sun-Yuan Hsieh, Shi-Chun Tsai, and Mong-Jen Kao

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
We consider the min-max graph balancing problem with strict negative correlation (SNC) constraints. The graph balancing problem arises as an equivalent formulation of the classic unrelated machine scheduling problem, where we are given a hypergraph G = (V,E) with a vertex-dependent edge weight function p: E×V ↦ ℤ^{≥0} that represents the processing time of the edges (jobs). The SNC constraints, which are given as edge subsets C_1,C_2,…,C_k, require that the edges in the same subset cannot be assigned to the same vertex at the same time. Under these constraints, the goal is to compute an edge orientation (assignment) that minimizes the maximum workload of the vertices. In this paper, we conduct a general study on the approximability of this problem. First, we show that, in the presence of SNC constraints, the case with max_{e ∈ E} |e| = max_i |C_i| = 2 is the only case for which approximate solutions can be obtained. Any further generalization in either direction, e.g., of max_{e ∈ E} |e| or max_i |C_i|, immediately makes even computing a feasible solution NP-complete. Then, we present a 2-approximation algorithm for the case with max_{e ∈ E} |e| = max_i |C_i| = 2, based on a set of structural simplifications and a tailored assignment LP for this problem. We note that our approach is general and can be applied to similar settings, e.g., scheduling with SNC constraints to minimize the weighted completion time, to obtain similar approximation guarantees. Further cases are discussed to describe the landscape of the approximability of this problem. For the case with |V| ≤ 2, which is already known to be NP-hard, we present a fully polynomial-time approximation scheme (FPTAS). On the other hand, we show that the problem is at least as hard to approximate as vertex cover when |V| ≥ 3.
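
To make the objective and the SNC constraints concrete, one way to write the problem described above is the following; the orientation map σ is our notation and simply assigns each hyperedge to one of its own vertices.

    % Min-max graph balancing with SNC constraints.
    \[
      \min_{\sigma : E \to V,\; \sigma(e) \in e}\;\; \max_{v \in V} \sum_{e \,:\, \sigma(e) = v} p(e, v)
      \quad \text{s.t.} \quad
      \sigma(e) \neq \sigma(e') \;\; \text{for all } e \neq e' \text{ with } e, e' \in C_i,\; 1 \le i \le k
    \]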

Cite as

Ting-Yu Kuo, Yu-Han Chen, Andrea Frosini, Sun-Yuan Hsieh, Shi-Chun Tsai, and Mong-Jen Kao. On Min-Max Graph Balancing with Strict Negative Correlation Constraints. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 50:1-50:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kuo_et_al:LIPIcs.ISAAC.2023.50,
  author =	{Kuo, Ting-Yu and Chen, Yu-Han and Frosini, Andrea and Hsieh, Sun-Yuan and Tsai, Shi-Chun and Kao, Mong-Jen},
  title =	{{On Min-Max Graph Balancing with Strict Negative Correlation Constraints}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{50:1--50:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.50},
  URN =		{urn:nbn:de:0030-drops-193524},
  doi =		{10.4230/LIPIcs.ISAAC.2023.50},
  annote =	{Keywords: Unrelated Scheduling, Graph Balancing, Strict Correlation Constraints}
}
Document
Practical Relativistic Zero-Knowledge for NP

Authors: Claude Crépeau, Arnaud Y. Massenet, Louis Salvail, Lucas Shigeru Stinchcombe, and Nan Yang

Published in: LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)


Abstract
In a Multi-Prover environment, how little spatial separation is sufficient to assert the validity of an NP statement in Perfect Zero-Knowledge? We exhibit a set of two novel Zero-Knowledge protocols for the 3-COLorability problem that use two (local) provers or three (entangled) provers and only require exchanging one edge and two bits with two trits per prover. This greatly improves the ability to prove Zero-Knowledge statements over very short distances with very basic communication gear.

Cite as

Claude Crépeau, Arnaud Y. Massenet, Louis Salvail, Lucas Shigeru Stinchcombe, and Nan Yang. Practical Relativistic Zero-Knowledge for NP. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{crepeau_et_al:LIPIcs.ITC.2020.4,
  author =	{Cr\'{e}peau, Claude and Massenet, Arnaud Y. and Salvail, Louis and Stinchcombe, Lucas Shigeru and Yang, Nan},
  title =	{{Practical Relativistic Zero-Knowledge for NP}},
  booktitle =	{1st Conference on Information-Theoretic Cryptography (ITC 2020)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-151-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{163},
  editor =	{Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.4},
  URN =		{urn:nbn:de:0030-drops-121091},
  doi =		{10.4230/LIPIcs.ITC.2020.4},
  annote =	{Keywords: Multi-Prover Interactive Proofs, Relativistic Commitments, 3-COLorability, Quantum Entanglement, Non-Locality}
}
Document
The RGB No-Signalling Game

Authors: Xavier Coiteux-Roy and Claude Crépeau

Published in: LIPIcs, Volume 135, 14th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2019)


Abstract
We introduce the simplest of all No-Signalling Games: the RGB Game, in which two verifiers interrogate two provers, Alice and Bob, who are far enough from each other that communication between them is too slow to be possible. Each prover may be independently queried with one of three possible colours: Red, Green or Blue. Let a be the colour announced to Alice and b be the colour announced to Bob. To win the game they must reply with colours x (resp. y) such that a != x != y != b. This work focuses on this new game mainly as a pedagogical tool, for its simplicity, but also because it triggered us to introduce a new set of definitions for reductions among multi-party probability distributions and related non-locality classes. We show that a particular winning strategy for the RGB Game is equivalent to the PR-Box of Popescu-Rohrlich and thus No-Signalling. Moreover, we use this example to define No-Signalling in a new, useful way: as the intersection of two natural classes of multi-party probability distributions called one-way signalling. We exhibit a quantum strategy able to beat the classical local maximum winning probability of 8/9, raising it to 11/12. Optimality of this quantum strategy is demonstrated using the standard tool of semidefinite programming.
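
As a sanity check of the classical bound mentioned above, the following is a small brute-force sketch over all deterministic local strategies; the uniform distribution over the nine query pairs is our assumption for illustration.

    # Brute-force the best classical (deterministic, local) winning probability of the RGB Game,
    # assuming the two queries are independent and uniform over the three colours.
    from itertools import product

    COLOURS = range(3)                              # 0 = Red, 1 = Green, 2 = Blue
    strategies = list(product(COLOURS, repeat=3))   # all functions {R,G,B} -> {R,G,B}

    def wins(a, b, f, g):
        x, y = f[a], g[b]
        return a != x and x != y and y != b         # the winning condition a != x != y != b

    best = max(
        sum(wins(a, b, f, g) for a in COLOURS for b in COLOURS) / 9
        for f in strategies for g in strategies
    )
    print(best)   # expected to print 0.888... = 8/9, matching the classical bound above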

Cite as

Xavier Coiteux-Roy and Claude Crépeau. The RGB No-Signalling Game. In 14th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 135, pp. 4:1-4:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{coiteuxroy_et_al:LIPIcs.TQC.2019.4,
  author =	{Coiteux-Roy, Xavier and Cr\'{e}peau, Claude},
  title =	{{The RGB No-Signalling Game}},
  booktitle =	{14th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2019)},
  pages =	{4:1--4:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-112-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{135},
  editor =	{van Dam, Wim and Man\v{c}inska, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2019.4},
  URN =		{urn:nbn:de:0030-drops-103965},
  doi =		{10.4230/LIPIcs.TQC.2019.4},
  annote =	{Keywords: No-Signalling, Quantum Entanglement, Non-Locality, Bell inequality, Semidefinite Programming, Non-locality Hierarchy}
}
  • Refine by Author
  • 2 Crépeau, Claude
  • 1 Baldimtsi, Foteini
  • 1 Bonlarron, Alexandre
  • 1 Borndörfer, Ralf
  • 1 Caccamo, Marco
  • Show More...

  • Refine by Classification
  • 2 Computer systems organization → Real-time systems
  • 2 Theory of computation → Quantum information theory
  • 1 Applied computing → Transportation
  • 1 Computer systems organization → Embedded systems
  • 1 Computing methodologies → Learning linear models
  • Show More...

  • Refine by Keyword
  • 2 Non-Locality
  • 2 Quantum Entanglement
  • 1 3-COLorability
  • 1 A-star Search
  • 1 Activity Recognition
  • Show More...

  • Refine by Type
  • 14 document

  • Refine by Publication Year
  • 11 2024
  • 1 2019
  • 1 2020
  • 1 2023
