85 Search Results for "Liu-Zhang, Chen-Da"


Document
Short Paper
FLEX: Fault Localization and Explanation Using Open-Source Large Language Models in Powertrain Systems (Short Paper)

Authors: Herbert Muehlburger and Franz Wotawa

Published in: OASIcs, Volume 125, 35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024)


Abstract
Cyber-physical systems (CPS) are critical to modern infrastructure, but are vulnerable to faults and anomalies that threaten their operational safety. In this work, we evaluate the use of open-source Large Language Models (LLMs), such as Mistral 7B, Llama3.1:8b-instruct-fp16, and others to detect anomalies in two distinct datasets: battery management and powertrain systems. Our methodology utilises retrieval-augmented generation (RAG) techniques, incorporating a novel two-step process where LLMs first infer operational rules from normal behavior before applying these rules for fault detection. During the experiments, we found that the original prompt design yielded strong results for the battery dataset but required modification for the powertrain dataset to improve performance. The adjusted prompt, which emphasises rule inference, significantly improved anomaly detection for the powertrain dataset. Experimental results show that models like Mistral 7B achieved F1-scores up to 0.99, while Llama3.1:8b-instruct-fp16 and Gemma 2 reached perfect F1-scores of 1.0 in complex scenarios. These findings demonstrate the impact of effective prompt design and rule inference in improving LLM-based fault detection for CPS, contributing to increased operational resilience.
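
The two-step process described in the abstract lends itself to a small sketch. The following is an illustration only, not the authors' FLEX implementation: query_llm is a hypothetical placeholder for whatever local LLM client is used (for example an Ollama wrapper), and the prompt wording is invented.

# Minimal sketch of a two-step RAG-style pipeline: infer rules from normal
# behaviour first, then apply them to classify new traces.
from typing import List

def query_llm(model: str, prompt: str) -> str:
    """Placeholder: send the prompt to a local open-source LLM and return its reply."""
    raise NotImplementedError("wire this up to your own LLM client")

def infer_rules(model: str, normal_traces: List[str]) -> str:
    # Step 1: derive operational rules from fault-free behaviour only.
    prompt = (
        "The following signal traces show normal powertrain behaviour:\n"
        + "\n".join(normal_traces)
        + "\nDerive a concise list of operational rules that these signals obey."
    )
    return query_llm(model, prompt)

def detect_fault(model: str, rules: str, trace: str) -> bool:
    # Step 2: check a new trace against the previously inferred rules.
    prompt = (
        f"Operational rules:\n{rules}\n\nNew trace:\n{trace}\n"
        "Answer ANOMALY if any rule is violated, otherwise answer NORMAL."
    )
    return "ANOMALY" in query_llm(model, prompt).upper()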

Cite as

Herbert Muehlburger and Franz Wotawa. FLEX: Fault Localization and Explanation Using Open-Source Large Language Models in Powertrain Systems (Short Paper). In 35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024). Open Access Series in Informatics (OASIcs), Volume 125, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{muehlburger_et_al:OASIcs.DX.2024.25,
  author =	{Muehlburger, Herbert and Wotawa, Franz},
  title =	{{FLEX: Fault Localization and Explanation Using Open-Source Large Language Models in Powertrain Systems}},
  booktitle =	{35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024)},
  pages =	{25:1--25:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-356-0},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{125},
  editor =	{Pill, Ingo and Natan, Avraham and Wotawa, Franz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2024.25},
  URN =		{urn:nbn:de:0030-drops-221170},
  doi =		{10.4230/OASIcs.DX.2024.25},
  annote =	{Keywords: Fault detection, anomaly detection, powertrain systems, large language models, open-source LLMs}
}
Document
Real-Time Data-Driven Maintenance Logistics: A Public-Private Collaboration

Authors: Willem van Jaarsveld, Alp Akçay, Laurens Bliek, Paulo da Costa, Mathijs de Weerdt, Rik Eshuis, Stella Kapodistria, Uzay Kaymak, Verus Pronk, Geert-Jan van Houtum, Peter Verleijsdonk, Sicco Verwer, Simon Voorberg, and Yingqian Zhang

Published in: OASIcs, Volume 124, Commit2Data (2024)


Abstract
The project "Real-time data-driven maintenance logistics" was initiated with the purpose of bringing innovations in data-driven decision making to maintenance logistics by bringing together problem owners, in the form of three innovative companies, and researchers at two leading knowledge institutions. This paper reviews innovations in three related areas: how the innovations were inspired by practice, how they materialized, and how the results impact practice.

Cite as

Willem van Jaarsveld, Alp Akçay, Laurens Bliek, Paulo da Costa, Mathijs de Weerdt, Rik Eshuis, Stella Kapodistria, Uzay Kaymak, Verus Pronk, Geert-Jan van Houtum, Peter Verleijsdonk, Sicco Verwer, Simon Voorberg, and Yingqian Zhang. Real-Time Data-Driven Maintenance Logistics: A Public-Private Collaboration. In Commit2Data. Open Access Series in Informatics (OASIcs), Volume 124, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{vanjaarsveld_et_al:OASIcs.Commit2Data.5,
  author =	{van Jaarsveld, Willem and Ak\c{c}ay, Alp and Bliek, Laurens and da Costa, Paulo and de Weerdt, Mathijs and Eshuis, Rik and Kapodistria, Stella and Kaymak, Uzay and Pronk, Verus and van Houtum, Geert-Jan and Verleijsdonk, Peter and Verwer, Sicco and Voorberg, Simon and Zhang, Yingqian},
  title =	{{Real-Time Data-Driven Maintenance Logistics: A Public-Private Collaboration}},
  booktitle =	{Commit2Data},
  pages =	{5:1--5:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-351-5},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{124},
  editor =	{Haverkort, Boudewijn R. and de Jongste, Aldert and van Kuilenburg, Pieter and Vromans, Ruben D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Commit2Data.5},
  URN =		{urn:nbn:de:0030-drops-213626},
  doi =		{10.4230/OASIcs.Commit2Data.5},
  annote =	{Keywords: Data, Maintenance, Logistics, Optimization, Research, Project}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets respectively.
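
As a rough sketch of the final alignment step only (not the DeFiAligner code; the PathSummary fields and prompt wording are assumptions), the symbolic expressions recovered by the SEVM component could be packed with a documentation excerpt into a structured prompt, and the model's verdict parsed:

from dataclasses import dataclass
from typing import List

@dataclass
class PathSummary:
    branch_condition: str       # symbolic branch condition, e.g. "amount <= balance[sender]"
    balance_changes: List[str]  # symbolic token-balance deltas along the path

def build_alignment_prompt(doc_excerpt: str, path: PathSummary) -> str:
    return (
        "Documentation:\n" + doc_excerpt + "\n\n"
        "Execution path summary:\n"
        "  condition: " + path.branch_condition + "\n"
        "  balance changes: " + "; ".join(path.balance_changes) + "\n\n"
        "Question: is this execution path consistent with the documentation? "
        "Answer CONSISTENT or INCONSISTENT and give a one-sentence reason."
    )

def parse_verdict(reply: str) -> bool:
    # True means the LLM flagged an inconsistency for this path.
    return "INCONSISTENT" in reply.upper()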

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
Profitable Manipulations of Cryptographic Self-Selection Are Statistically Detectable

Authors: Linda Cai, Jingyi Liu, S. Matthew Weinberg, and Chenghan Zhou

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Cryptographic Self-Selection is a common primitive underlying leader-selection for Proof-of-Stake blockchain protocols. The concept was first popularized in Algorand [Jing Chen and Silvio Micali, 2019], who also observed that the protocol might be manipulable. [Matheus V. X. Ferreira et al., 2022] provide a concrete manipulation that is strictly profitable for a staker of any size (and also prove upper bounds on the gains from manipulation). Separately, [Maryam Bahrani and S. Matthew Weinberg, 2024; Aviv Yaish et al., 2023] initiate the study of undetectable profitable manipulations of consensus protocols with a focus on the seminal Selfish Mining strategy [Eyal and Sirer, 2014] for Bitcoin’s Proof-of-Work longest-chain protocol. They design a Selfish Mining variant that, for sufficiently large miners, is strictly profitable yet also indistinguishable to an onlooker from routine latency (that is, a sufficiently large profit-maximizing miner could use their strategy to strictly profit over being honest in a way that still appears to the rest of the network as though everyone is honest but experiencing mildly higher latency. This avoids any risk of negatively impacting the value of the underlying cryptocurrency due to attack detection). We investigate the detectability of profitable manipulations of the canonical cryptographic self-selection leader selection protocol introduced in [Jing Chen and Silvio Micali, 2019] and studied in [Matheus V. X. Ferreira et al., 2022], and establish that for any player with α < (3-√5)/2 ≈ 0.38 fraction of the total stake, every strictly profitable manipulation is statistically detectable. Specifically, we consider an onlooker who sees only the random seed of each round (and does not need to see any other broadcasts by any other players). We show that the distribution of the sequence of random seeds when any player is profitably manipulating the protocol is inconsistent with any distribution that could arise by honest stakers being offline or timing out (for a natural stylized model of honest timeouts).
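
The detection result is information-theoretic and the paper's statistical argument is specific to its model of honest timeouts. Purely as a loose illustration of an onlooker who records only the per-round seeds, the toy chi-squared check below compares bucketed seeds against the uniform behaviour that unbiased seeding would induce; the bucket count and threshold are arbitrary, not the paper's test.

from collections import Counter
from typing import Iterable

def chi_square_uniform(seeds: Iterable[int], buckets: int = 16) -> float:
    # Goodness-of-fit statistic of the bucketed seeds against a uniform reference.
    counts = Counter(s % buckets for s in seeds)
    n = sum(counts.values())
    expected = n / buckets
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(buckets))

# With 16 buckets (15 degrees of freedom), a statistic above roughly 25 would be
# suspicious at the 5% level.  Real detection of self-selection manipulation needs
# the paper's analysis, since honest stakers being offline also shifts the seeds.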

Cite as

Linda Cai, Jingyi Liu, S. Matthew Weinberg, and Chenghan Zhou. Profitable Manipulations of Cryptographic Self-Selection Are Statistically Detectable. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 30:1-30:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cai_et_al:LIPIcs.AFT.2024.30,
  author =	{Cai, Linda and Liu, Jingyi and Weinberg, S. Matthew and Zhou, Chenghan},
  title =	{{Profitable Manipulations of Cryptographic Self-Selection Are Statistically Detectable}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{30:1--30:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.30},
  URN =		{urn:nbn:de:0030-drops-209660},
  doi =		{10.4230/LIPIcs.AFT.2024.30},
  annote =	{Keywords: Blockchain, Cryptocurrency, Proof-of-Stake, Strategic Mining, Statistical Detection}
}
Document
RANDOM
Near-Linear Time Samplers for Matroid Independent Sets with Applications

Authors: Xiaoyu Chen, Heng Guo, Xinyuan Zhang, and Zongrui Zou

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We give a Õ(n) time almost uniform sampler for independent sets of a matroid, whose ground set has n elements and is given by an independence oracle. As a consequence, one can sample connected spanning subgraphs of a given graph G = (V,E) in Õ(|E|) time, whereas the previous best algorithm takes O(|E||V|) time. This improvement, in turn, leads to a faster running time on estimating all-terminal network reliability. Furthermore, we generalise this near-linear time sampler to the random cluster model with q ≤ 1.
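
For intuition only: the toy chain below samples independent sets of a matroid through an independence oracle with a simple lazy insert/delete walk. It converges to the uniform distribution in the limit but says nothing about the Õ(n) running time, which is the paper's contribution; the uniform-matroid oracle in the example is an assumption for demonstration.

import random
from typing import Callable, FrozenSet, Set

def sample_independent_set(
    ground_set: Set[int],
    is_independent: Callable[[FrozenSet[int]], bool],
    steps: int = 100_000,
) -> FrozenSet[int]:
    elements = list(ground_set)
    current: Set[int] = set()          # the empty set is independent in any matroid
    for _ in range(steps):
        e = random.choice(elements)
        if random.random() < 0.5:      # lazy step keeps the chain aperiodic
            continue
        if e in current:
            current.remove(e)          # delete move
        elif is_independent(frozenset(current | {e})):
            current.add(e)             # insert move, guarded by the oracle
    return frozenset(current)

# Example oracle: the uniform matroid U(2, n), whose independent sets are all
# subsets of size at most 2.
if __name__ == "__main__":
    oracle = lambda s: len(s) <= 2
    print(sample_independent_set(set(range(6)), oracle, steps=10_000))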

Cite as

Xiaoyu Chen, Heng Guo, Xinyuan Zhang, and Zongrui Zou. Near-Linear Time Samplers for Matroid Independent Sets with Applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 32:1-32:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.APPROX/RANDOM.2024.32,
  author =	{Chen, Xiaoyu and Guo, Heng and Zhang, Xinyuan and Zou, Zongrui},
  title =	{{Near-Linear Time Samplers for Matroid Independent Sets with Applications}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{32:1--32:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.32},
  URN =		{urn:nbn:de:0030-drops-210254},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.32},
  annote =	{Keywords: Network reliability, Random cluster model, Matroid, Bases-exchange walk}
}
Document
HOBBIT: Hashed OBject Based InTegrity

Authors: Matthias Bernad and Stefan Brunthaler

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
C vulnerabilities usually hold verbatim for C++ programs. The counterfeit-object-oriented programming attack demonstrated that this relation is asymmetric, i.e., it only applies to C++. The problem pinpointed by this COOP attack is that C++ does not validate the integrity of its objects. By injecting malicious objects with manipulated virtual function table pointers, attackers can hijack control-flow of programs. The software security community addressed the COOP-problem in the years following its discovery, but together with the emergence of transient-execution attacks, such as Spectre, researchers also shifted their attention. We present Hobbit, a software-only solution to prevent COOP attacks by validating object integrity for virtual function pointer tables. Hobbit does not require any hardware specific features, scales to multi-million lines of C++ source code, and our LLVM-based implementation offers a configurable performance impact between 121.63% and 2.80% on compute-intensive SPEC CPU C++ benchmarks. Hobbit’s security analysis indicates strong resistance to brute forcing attacks and demonstrates additional benefits of using execute-only memory.
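
Hobbit itself is an LLVM-based compiler pass instrumenting C++ virtual calls; the Python fragment below is only a language-agnostic sketch of the underlying idea of binding an integrity tag to an object's dispatch pointer at construction and re-checking it before every indirect call. The HMAC key and tag layout are illustrative assumptions, not Hobbit's actual design.

import hashlib, hmac, os

SECRET_KEY = os.urandom(16)   # in Hobbit, protected state benefits from execute-only memory

def tag(dispatch_table_id: int) -> bytes:
    return hmac.new(SECRET_KEY, dispatch_table_id.to_bytes(8, "little"), hashlib.sha256).digest()

class ProtectedObject:
    def __init__(self, dispatch_table):
        self._vtable = dispatch_table
        self._tag = tag(id(dispatch_table))   # integrity tag computed once, at construction

    def call(self, method: str, *args):
        # A counterfeit object whose dispatch pointer was swapped or injected
        # fails this check before any "virtual" method runs.
        if not hmac.compare_digest(self._tag, tag(id(self._vtable))):
            raise RuntimeError("object integrity violation: dispatch table tampered with")
        return self._vtable[method](*args)

if __name__ == "__main__":
    obj = ProtectedObject({"speak": lambda: "ok"})
    print(obj.call("speak"))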

Cite as

Matthias Bernad and Stefan Brunthaler. HOBBIT: Hashed OBject Based InTegrity. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 7:1-7:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bernad_et_al:LIPIcs.ECOOP.2024.7,
  author =	{Bernad, Matthias and Brunthaler, Stefan},
  title =	{{HOBBIT: Hashed OBject Based InTegrity}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{7:1--7:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.7},
  URN =		{urn:nbn:de:0030-drops-208566},
  doi =		{10.4230/LIPIcs.ECOOP.2024.7},
  annote =	{Keywords: software security, code-reuse attacks, language-based security, counterfeit-object-oriented programming, object integrity, compiler security}
}
Document
Short Paper
Evaluating the Ability of Large Language Models to Reason About Cardinal Directions (Short Paper)

Authors: Anthony G Cohn and Robert E Blackwell

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
We investigate the abilities of a representative set of Large Language Models (LLMs) to reason about cardinal directions (CDs). To do so, we create two datasets: the first, co-created with ChatGPT, focuses largely on recall of world knowledge about CDs; the second is generated from a set of templates, comprehensively testing an LLM’s ability to determine the correct CD given a particular scenario. The templates allow for a number of degrees of variation, such as the means of locomotion of the agent involved and whether the scenario is set in the first, second, or third person. Our experiments show that although LLMs are able to perform well on the simpler dataset, on the second, more complex dataset no LLM is able to reliably determine the correct CD, even with a temperature setting of zero.
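
As a small illustration of template-based question generation with ground-truth answers (the template, locomotion list, and scoring rule below are invented, not the paper's dataset; the paper's templates additionally vary the grammatical person):

import itertools

OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}
LOCOMOTION = ["walking", "cycling", "driving"]
TEMPLATE = ("Alice is {locomotion} {facing}. She turns around. "
            "In which cardinal direction is she now heading?")

def generate():
    # Each template instantiation carries the CD answer implied by the scenario.
    for locomotion, facing in itertools.product(LOCOMOTION, OPPOSITE):
        yield TEMPLATE.format(locomotion=locomotion, facing=facing), OPPOSITE[facing]

def score(llm_answer: str, expected: str) -> bool:
    return expected in llm_answer.lower()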

Cite as

Anthony G Cohn and Robert E Blackwell. Evaluating the Ability of Large Language Models to Reason About Cardinal Directions (Short Paper). In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 28:1-28:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cohn_et_al:LIPIcs.COSIT.2024.28,
  author =	{Cohn, Anthony G and Blackwell, Robert E},
  title =	{{Evaluating the Ability of Large Language Models to Reason About Cardinal Directions}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{28:1--28:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.28},
  URN =		{urn:nbn:de:0030-drops-208432},
  doi =		{10.4230/LIPIcs.COSIT.2024.28},
  annote =	{Keywords: Large Language Models, Spatial Reasoning, Cardinal Directions}
}
Document
Designing 3D RNA Origami Nanostructures with a Minimum Number of Kissing Loops

Authors: Antti Elonen and Pekka Orponen

Published in: LIPIcs, Volume 314, 30th International Conference on DNA Computing and Molecular Programming (DNA 30) (2024)


Abstract
We present a general design technique for rendering any 3D wireframe model, that is, any connected graph linearly embedded in 3D space, as an RNA origami nanostructure with a minimum number of kissing loops. The design algorithm, which applies some ideas and methods from topological graph theory, produces renderings that contain at most one kissing-loop pair for many interesting model families, including, for instance, all fully triangulated wireframes and the wireframes of all Platonic solids. The design method is already implemented and available for use in the design tool DNAforge (https://dnaforge.org).

Cite as

Antti Elonen and Pekka Orponen. Designing 3D RNA Origami Nanostructures with a Minimum Number of Kissing Loops. In 30th International Conference on DNA Computing and Molecular Programming (DNA 30). Leibniz International Proceedings in Informatics (LIPIcs), Volume 314, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{elonen_et_al:LIPIcs.DNA.30.4,
  author =	{Elonen, Antti and Orponen, Pekka},
  title =	{{Designing 3D RNA Origami Nanostructures with a Minimum Number of Kissing Loops}},
  booktitle =	{30th International Conference on DNA Computing and Molecular Programming (DNA 30)},
  pages =	{4:1--4:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-344-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{314},
  editor =	{Seki, Shinnosuke and Stewart, Jaimie Marie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.30.4},
  URN =		{urn:nbn:de:0030-drops-209325},
  doi =		{10.4230/LIPIcs.DNA.30.4},
  annote =	{Keywords: RNA origami, wireframe nanostructures, polyhedra, kissing loops, topological graph embeddings, self-assembly}
}
Document
An Efficient Local Search Solver for Mixed Integer Programming

Authors: Peng Lin, Mengchuan Zou, and Shaowei Cai

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Mixed integer programming (MIP) is a fundamental model in operations research. Local search is a powerful method for solving hard problems, but the development of local search solvers for MIP still needs to be explored. This work develops an efficient local search solver for solving MIP, called Local-MIP. We propose two new operators for MIP to adaptively modify variables for optimizing the objective function and satisfying constraints, respectively. Furthermore, we design a new weighting scheme to dynamically balance the priority between the objective function and each constraint, and propose a two-level scoring function structure to hierarchically guide the search for high-quality feasible solutions. Experiments are conducted on seven public benchmarks to compare Local-MIP with state-of-the-art MIP solvers, which demonstrate that Local-MIP significantly outperforms CPLEX, HiGHS, SCIP and Feasibility Jump, and is competitive with the most powerful commercial solver Gurobi. Moreover, Local-MIP establishes 4 new records for MIPLIB open instances.
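
Local-MIP's operators, weighting scheme, and two-level scoring function are its contributions and are not reproduced here; the fragment below is only a crude sketch of the general pattern of a weighted local search for min c·x subject to A·x ≤ b over the integers, with constraint weights bumped when the search is stuck while infeasible.

import random
from typing import List

def weighted_violation(A: List[List[float]], b: List[float], x: List[int], w: List[float]) -> float:
    # Sum of constraint violations, each scaled by its current weight.
    return sum(wi * max(0.0, sum(ai * xi for ai, xi in zip(row, x)) - bi)
               for row, bi, wi in zip(A, b, w))

def local_search(c, A, b, x, iters=10_000, seed=0):
    rng = random.Random(seed)
    w = [1.0] * len(b)                       # one dynamic weight per constraint
    obj = lambda y: sum(ci * yi for ci, yi in zip(c, y))
    for _ in range(iters):
        i, delta = rng.randrange(len(x)), rng.choice([-1, 1])
        cand = x[:]
        cand[i] += delta                     # simple +/-1 move on one variable
        old, new = weighted_violation(A, b, x, w), weighted_violation(A, b, cand, w)
        if new < old or (new == old == 0.0 and obj(cand) < obj(x)):
            x = cand                         # better feasibility, or better objective while feasible
        elif old > 0.0:
            # Stuck while infeasible: increase the weight of every violated constraint.
            for j, row in enumerate(A):
                if sum(aj * xj for aj, xj in zip(row, x)) > b[j]:
                    w[j] += 1.0
    return x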

Cite as

Peng Lin, Mengchuan Zou, and Shaowei Cai. An Efficient Local Search Solver for Mixed Integer Programming. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lin_et_al:LIPIcs.CP.2024.19,
  author =	{Lin, Peng and Zou, Mengchuan and Cai, Shaowei},
  title =	{{An Efficient Local Search Solver for Mixed Integer Programming}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{19:1--19:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.19},
  URN =		{urn:nbn:de:0030-drops-207041},
  doi =		{10.4230/LIPIcs.CP.2024.19},
  annote =	{Keywords: Mixed Integer Programming, Local Search, Operator, Scoring Function}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on 2 datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
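
A minimal sketch of retrieval-augmented example selection for in-context learning, under the simplifying assumption of a bag-of-words similarity (the paper evaluates several selection strategies); the prompt layout is an illustration, not the paper's pipeline.

import math
from collections import Counter
from typing import List, Tuple

def cosine(a: str, b: str) -> float:
    # Bag-of-words cosine similarity between two problem descriptions.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(problem: str, pool: List[Tuple[str, str]], k: int = 3) -> str:
    # Pick the k most similar (description, model) pairs as in-context demonstrations.
    examples = sorted(pool, key=lambda ex: cosine(problem, ex[0]), reverse=True)[:k]
    shots = "\n\n".join(f"Description:\n{d}\nModel:\n{m}" for d, m in examples)
    return (f"{shots}\n\nDescription:\n{problem}\nModel:\n"
            "# write an executable constraint model here")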

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
RobTL: Robustness Temporal Logic for CPS

Authors: Valentina Castiglioni, Michele Loreti, and Simone Tini

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
We propose Robustness Temporal Logic (RobTL), a novel temporal logic for the specification and analysis of distances between the behaviours of Cyber-Physical Systems (CPS) over a finite time horizon. RobTL specifications allow us to measure the differences in the behaviours of systems with respect to various objectives and temporal constraints, and to study how those differences evolve in time. Specifically, the unique features of RobTL allow us to specify robustness properties of CPS against uncertainty and perturbations. As an example, we use RobTL to analyse the robustness of an engine system that is subject to attacks aimed at inflicting overstress of equipment.
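
RobTL's syntax and semantics are beyond a snippet; the toy check below only illustrates the underlying idea of comparing two behaviours over a finite horizon via a distance bound. The signal names and the absolute-value distance are assumptions, not RobTL constructs.

from typing import Sequence

def always_within(nominal: Sequence[float], perturbed: Sequence[float],
                  bound: float, horizon: int) -> bool:
    # Holds if the two behaviours stay within `bound` of each other at every
    # step up to the given horizon.
    return all(abs(a - b) <= bound for a, b in list(zip(nominal, perturbed))[:horizon])

# e.g. always_within(engine_temp_nominal, engine_temp_under_attack, bound=5.0, horizon=100)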

Cite as

Valentina Castiglioni, Michele Loreti, and Simone Tini. RobTL: Robustness Temporal Logic for CPS. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 15:1-15:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{castiglioni_et_al:LIPIcs.CONCUR.2024.15,
  author =	{Castiglioni, Valentina and Loreti, Michele and Tini, Simone},
  title =	{{RobTL: Robustness Temporal Logic for CPS}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{15:1--15:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.15},
  URN =		{urn:nbn:de:0030-drops-207870},
  doi =		{10.4230/LIPIcs.CONCUR.2024.15},
  annote =	{Keywords: Cyber-physical systems, robustness, temporal logic, uncertainty}
}
Document
MEM-Based Pangenome Indexing for k-mer Queries

Authors: Stephen Hwang, Nathaniel K. Brown, Omar Y. Ahmed, Katharine M. Jenike, Sam Kovaka, Michael C. Schatz, and Ben Langmead

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
Pangenomes are growing in number and size, thanks to the prevalence of high-quality long-read assemblies. However, current methods for studying sequence composition and conservation within pangenomes have limitations. Methods based on graph pangenomes require a computationally expensive multiple-alignment step, which can leave out some variation. Indexes based on k-mers and de Bruijn graphs are limited to answering questions at a specific substring length k. We present Maximal Exact Match Ordered (MEMO), a pangenome indexing method based on maximal exact matches (MEMs) between sequences. A single MEMO index can handle arbitrary-length queries over pangenomic windows. MEMO enables both queries that test k-mer presence/absence (membership queries) and that count the number of genomes containing k-mers in a window (conservation queries). MEMO’s index for a pangenome of 89 human autosomal haplotypes fits in 2.04 GB, 8.8× smaller than a comparable KMC3 index and 11.4× smaller than a PanKmer index. MEMO indexes can be made smaller by sacrificing some counting resolution, with our decile-resolution HPRC index reaching 0.67 GB. MEMO can conduct a conservation query for 31-mers over the human leukocyte antigen locus in 13.89 seconds, 2.5× faster than other approaches. MEMO’s small index size, lack of k-mer length dependence, and efficient queries make it a flexible tool for studying and visualizing substring conservation in pangenomes.
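
MEMO's compressed index is not reproduced here. As a loose conceptual illustration only, assume that for each non-pivot genome g we have an array ms[g] where ms[g][i] is the length of the longest match between the pivot sequence starting at position i and genome g; then k-mer membership and window conservation reduce to threshold checks over these lengths.

from typing import Dict, List

def membership(ms: Dict[str, List[int]], genome: str, i: int, k: int) -> bool:
    # The k-mer starting at pivot position i occurs in `genome` iff the match
    # starting there is at least k long.
    return ms[genome][i] >= k

def conservation(ms: Dict[str, List[int]], window: range, k: int) -> List[int]:
    # For each k-mer start position in the window, count the genomes containing it.
    return [sum(1 for g in ms if ms[g][i] >= k) for i in window]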

Cite as

Stephen Hwang, Nathaniel K. Brown, Omar Y. Ahmed, Katharine M. Jenike, Sam Kovaka, Michael C. Schatz, and Ben Langmead. MEM-Based Pangenome Indexing for k-mer Queries. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 4:1-4:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hwang_et_al:LIPIcs.WABI.2024.4,
  author =	{Hwang, Stephen and Brown, Nathaniel K. and Ahmed, Omar Y. and Jenike, Katharine M. and Kovaka, Sam and Schatz, Michael C. and Langmead, Ben},
  title =	{{MEM-Based Pangenome Indexing for k-mer Queries}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{4:1--4:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.4},
  URN =		{urn:nbn:de:0030-drops-206482},
  doi =		{10.4230/LIPIcs.WABI.2024.4},
  annote =	{Keywords: Pangenomics, Comparative genomics, Compressed indexing}
}
Document
Multi-qubit Lattice Surgery Scheduling

Authors: Allyson Silva, Xiangyi Zhang, Zak Webb, Mia Kramer, Chan-Woo Yang, Xiao Liu, Jessica Lemieux, Ka-Wai Chen, Artur Scherer, and Pooya Ronagh

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
Fault-tolerant quantum computation using two-dimensional topological quantum error correcting codes can benefit from multi-qubit long-range operations. By using simple commutation rules, a quantum circuit can be transpiled into a sequence of solely non-Clifford multi-qubit gates. Prior work on fault-tolerant compilation avoids optimal scheduling of such gates since they reduce the parallelizability of the circuit. We observe that the reduced parallelization potential is outweighed by the significant reduction in the number of gates. We therefore devise a method for scheduling multi-qubit lattice surgery using an earliest-available-first policy, solving the associated forest packing problem using a representation of the multi-qubit gates as Steiner trees. Our extensive testing on random and various Hamiltonian simulation circuits demonstrates the method’s scalability and performance. We show that the transpilation significantly reduces the circuit length on the set of circuits tested, and that the resulting circuit of multi-qubit gates has a further reduction in the expected circuit execution time compared to serial execution.
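
For intuition only, and ignoring the routing of each gate as a Steiner tree on the ancilla lattice (the part the paper actually solves), an earliest-available-first schedule over multi-qubit gates might look like the sketch below; gate names and the unit duration are placeholders.

from typing import Dict, List, Sequence, Tuple

def schedule(gates: Sequence[Tuple[str, Sequence[int]]], duration: int = 1) -> List[Tuple[str, int]]:
    free_at: Dict[int, int] = {}                 # qubit -> first time step it is free
    out = []
    for name, qubits in gates:                   # gates arrive in dependency order
        start = max((free_at.get(q, 0) for q in qubits), default=0)
        for q in qubits:
            free_at[q] = start + duration        # all touched qubits are busy for the gate
        out.append((name, start))
    return out

# Example: two multi-qubit gates sharing qubit 2 are forced into consecutive
# time steps, while a disjoint one runs in parallel.
# schedule([("P1", [0, 1, 2]), ("P2", [2, 3]), ("P3", [4, 5])])
# -> [("P1", 0), ("P2", 1), ("P3", 0)]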

Cite as

Allyson Silva, Xiangyi Zhang, Zak Webb, Mia Kramer, Chan-Woo Yang, Xiao Liu, Jessica Lemieux, Ka-Wai Chen, Artur Scherer, and Pooya Ronagh. Multi-qubit Lattice Surgery Scheduling. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 1:1-1:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{silva_et_al:LIPIcs.TQC.2024.1,
  author =	{Silva, Allyson and Zhang, Xiangyi and Webb, Zak and Kramer, Mia and Yang, Chan-Woo and Liu, Xiao and Lemieux, Jessica and Chen, Ka-Wai and Scherer, Artur and Ronagh, Pooya},
  title =	{{Multi-qubit Lattice Surgery Scheduling}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{1:1--1:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.1},
  URN =		{urn:nbn:de:0030-drops-206712},
  doi =		{10.4230/LIPIcs.TQC.2024.1},
  annote =	{Keywords: Scheduling, Large-Scale Optimization, Surface Code, Quantum Compilation, Circuit Optimization}
}
Document
The Quantum Decoding Problem

Authors: André Chailloux and Jean-Pierre Tillich

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
One of the founding results of lattice-based cryptography is a quantum reduction from the Short Integer Solution (SIS) problem to the Learning with Errors (LWE) problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry [Chen et al., 2022] that this reduction can be made more powerful by replacing the LWE problem with a quantum equivalent, where the errors are given in quantum superposition. In parallel, Regev’s reduction has recently been adapted in the context of code-based cryptography by Debris, Remaud and Tillich [Debris-Alazard et al., 2023], who showed a reduction between the Short Codeword Problem and the Decoding Problem (the DRT reduction). This motivates the study of the Quantum Decoding Problem (QDP), which is the Decoding Problem but with errors given in quantum superposition, and of how it behaves in the DRT reduction. The purpose of this paper is to introduce QDP and to lay a firm foundation for it. We first show that QDP is likely to be easier than classical decoding, by proving that it can be solved in quantum polynomial time in a large regime of noise, whereas no non-exponential quantum algorithm is known for the classical decoding problem. Then, we show that QDP can even be solved (albeit not necessarily efficiently) beyond the information-theoretic Shannon limit for classical decoding. We give precisely the largest noise level where we can solve QDP, giving in a sense the information-theoretic limit for this new problem. Finally, we study how QDP can be used in the DRT reduction. First, we show that our algorithms can be properly used in the DRT reduction, showing that our quantum algorithms for QDP beyond Shannon capacity can be used to find minimal-weight codewords in a random code. On the negative side, we show that the DRT reduction cannot be, in all generality, a reduction between finding small codewords and QDP, by exhibiting quantum algorithms for QDP where this reduction entirely fails. Our proof techniques include the use of specific quantum measurements, such as q-ary unambiguous state discrimination and pretty good measurements, as well as strong concentration bounds on the weight distribution of random shifted dual codes, which we relate using quantum Fourier analysis.
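
A schematic contrast between the two problems, inferred from the abstract only (the precise error model, state format, and parameter regimes are in the paper):

\[
\textbf{Classical decoding:}\quad \text{given } \mathbf{G}\in\mathbb{F}_q^{k\times n} \text{ and } \mathbf{y}=\mathbf{x}\mathbf{G}+\mathbf{e} \text{ with } \mathbf{e}\sim\pi, \text{ recover } \mathbf{x}.
\]
\[
\textbf{QDP (schematic):}\quad \text{given } \mathbf{G} \text{ and the superposition } |\psi_{\mathbf{x}}\rangle=\sum_{\mathbf{e}}\sqrt{\pi(\mathbf{e})}\,|\mathbf{x}\mathbf{G}+\mathbf{e}\rangle, \text{ recover } \mathbf{x}.
\]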

Cite as

André Chailloux and Jean-Pierre Tillich. The Quantum Decoding Problem. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chailloux_et_al:LIPIcs.TQC.2024.6,
  author =	{Chailloux, Andr\'{e} and Tillich, Jean-Pierre},
  title =	{{The Quantum Decoding Problem}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.6},
  URN =		{urn:nbn:de:0030-drops-206767},
  doi =		{10.4230/LIPIcs.TQC.2024.6},
  annote =	{Keywords: quantum information theory, code-based cryptography, quantum algorithms}
}
Document
Scheduling with Locality by Routing

Authors: Alison Hsiang-Hsuan Liu and Fu-Hong Liu

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
This work examines a strongly NP-hard routing problem on trees, in which multiple servers need to serve a given set of requests (on vertices), where the routes of the servers start from a common source and end at their respective terminals. Each server can travel free of cost on its source-to-terminal path but has to pay for travel on other edges. The objective is to minimize the maximum cost over all servers. As the servers may pay different costs for traveling through a common edge, balancing the loads of the servers can be difficult. We propose a polynomial-time 4-approximation algorithm that applies the parametric pruning framework but consists of two phases. The first phase of the algorithm partitions the requests into packets, and the second phase of the algorithm assigns the packets to the servers. Unlike the standard parametric pruning techniques, the challenge of our algorithm design and analysis is to harmoniously relate the quality of the partition in the first phase, the balances of the servers' loads in the second phase, and the hypothetical optimal values of the framework. For the problem in general graphs, we show that there is no algorithm better than 2-approximate unless P = NP. The problem is a generalization of unrelated machine scheduling and other classic scheduling problems. It also models scheduling problems where the job processing times depend on the machine serving the job and the other jobs served by that machine. This modeling provides a framework that physicalizes scheduling problems through the graph’s point of view.
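
The two-phase feasibility test (partition requests into packets, then assign packets to servers) is the paper's contribution and is only stubbed below; the fragment sketches the parametric-pruning outer loop, which searches for the smallest guess T of the optimal maximum cost for which the pruned instance admits a feasible assignment.

from typing import Callable

def parametric_pruning(lo: int, hi: int, feasible: Callable[[int], bool]) -> int:
    # `feasible(T)` should implement the paper's two phases: partition the
    # requests into packets of cost bounded in terms of T, then assign packets
    # to servers so that no server pays more than a constant factor times T.
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo   # the schedule built for this guess is within a constant factor of optimal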

Cite as

Alison Hsiang-Hsuan Liu and Fu-Hong Liu. Scheduling with Locality by Routing. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 69:1-69:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{liu_et_al:LIPIcs.MFCS.2024.69,
  author =	{Liu, Alison Hsiang-Hsuan and Liu, Fu-Hong},
  title =	{{Scheduling with Locality by Routing}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{69:1--69:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.69},
  URN =		{urn:nbn:de:0030-drops-206250},
  doi =		{10.4230/LIPIcs.MFCS.2024.69},
  annote =	{Keywords: Makespan minimization, Approximation algorithms, Routing problems, Parametric pruning framework}
}
  • Refine by Author
  • 9 Liu-Zhang, Chen-Da
  • 4 Chen, Lin
  • 4 Zhang, Guochuan
  • 4 Zheng, Da Wei
  • 3 Goyal, Vipul

  • Refine by Keyword
  • 5 broadcast
  • 4 Large Language Models
  • 3 Scheduling
  • 3 local search
  • 3 partial broadcast

  • Refine by Type
  • 85 document

  • Refine by Publication Year
  • 24 2024
  • 21 2023
  • 8 2021
  • 8 2022
  • 6 2020
