43 Search Results for "Wu, Yi-Chieh"


Document
SubModST: A Fast Generic Solver for Submodular Maximization with Size Constraints

Authors: Henning Martin Woydt, Christian Komusiewicz, and Frank Sommer

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Cardinality-Constrained Maximization (Minimization) problem the input is a universe 𝒰, a function f: 2^𝒰 → ℝ, and an integer k, and the task is to find a set S ⊆ 𝒰 with |S| ≤ k that maximizes (minimizes) f(S). Many well-studied problems such as Facility Location, Partial Dominating Set, Group Closeness Centrality, and Euclidean k-Medoid Clustering are special cases of Cardinality-Constrained Maximization (Minimization). All the above-mentioned problems have the diminishing-returns property, that is, the improvement from adding an element e ∈ 𝒰 to a set S is at least as large as the improvement from adding e to any superset of S. This property is called submodularity for maximization problems and supermodularity for minimization problems. In this work we develop a new exact branch-and-cut algorithm SubModST for the generic Submodular Cardinality-Constrained Maximization and Supermodular Cardinality-Constrained Minimization problems. We develop several speed-ups for SubModST and show their effectiveness on six example problems. We show that SubModST outperforms the state-of-the-art solvers developed by Csókás and Vinkó [J. Glob. Optim. '24] and Uematsu et al. [J. Oper. Res. Soc. Japan '20] for Submodular Cardinality-Constrained Maximization by orders of magnitude.
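For concreteness, here is a toy Python sketch of the problem setting and the diminishing-returns property (it is not the SubModST branch-and-cut algorithm; the coverage function, its universe, and all helper names below are illustrative assumptions):

from itertools import combinations

# Toy universe: f(S) = number of ground items covered by the chosen sets.
# Coverage functions are a classic example of submodular set functions.
COVER = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}

def f(S):
    covered = set()
    for e in S:
        covered |= COVER[e]
    return len(covered)

def marginal_gain(e, S):
    return f(S | {e}) - f(S)

# Diminishing returns: adding e to S helps at least as much as adding e to a superset of S.
S, T = {"a"}, {"a", "b"}
assert marginal_gain("c", S) >= marginal_gain("c", T)

def brute_force_max(k):
    # Exact optimum for tiny instances: try every subset of size at most k.
    best_val, best_set = 0, frozenset()
    for r in range(k + 1):
        for subset in combinations(COVER, r):
            val = f(set(subset))
            if val > best_val:
                best_val, best_set = val, frozenset(subset)
    return best_val, best_set

print(brute_force_max(2))  # expected value 6, e.g. with the pair {'a', 'c'}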

Cite as

Henning Martin Woydt, Christian Komusiewicz, and Frank Sommer. SubModST: A Fast Generic Solver for Submodular Maximization with Size Constraints. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 102:1-102:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{woydt_et_al:LIPIcs.ESA.2024.102,
  author =	{Woydt, Henning Martin and Komusiewicz, Christian and Sommer, Frank},
  title =	{{SubModST: A Fast Generic Solver for Submodular Maximization with Size Constraints}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{102:1--102:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.102},
  URN =		{urn:nbn:de:0030-drops-211730},
  doi =		{10.4230/LIPIcs.ESA.2024.102},
  annote =	{Keywords: Branch-and-Cut, Lazy Evaluations, Facility Location, Group Closeness Centrality, Partial Dominating Set}
}
Document
Investigating Wrench Attacks: Physical Attacks Targeting Cryptocurrency Users

Authors: Marilyne Ordekian, Gilberto Atondo-Siu, Alice Hutchings, and Marie Vasek

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Cryptocurrency wrench attacks are physical attacks targeting cryptocurrency users in the real world to illegally obtain cryptocurrencies. These attacks significantly undermine the efficacy of existing digital security norms when confronted with real-world threats. We present the first comprehensive study on wrench attacks. We propose a theoretical approach to defining wrench attacks per criminal law norms, and an interdisciplinary empirical approach to measure their incidence. Leveraging three data sources (10 interviews with victims and experts, 146 news articles, and 37 online forums), we perform crime script analysis and detect incidents globally. Our findings reveal diverse groups of attackers ranging from organized crime groups to friends and family, various modi operandi, and different forms of attacks varying from blackmail to murder. Despite existing since Bitcoin’s early days, these attacks are underreported due to revictimization fears. Additionally, unlike with other cryptocurrency crimes, even users with advanced security experience were not immune to them. We identify potential vulnerabilities in users' behavior and encourage cryptocurrency holders to lean into digital as well as physical safety measures to protect themselves and their cryptocurrency. We offer actionable recommendations for the security community and regulators, highlighting the double-edged sword of Know Your Customer policies.

Cite as

Marilyne Ordekian, Gilberto Atondo-Siu, Alice Hutchings, and Marie Vasek. Investigating Wrench Attacks: Physical Attacks Targeting Cryptocurrency Users. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 24:1-24:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ordekian_et_al:LIPIcs.AFT.2024.24,
  author =	{Ordekian, Marilyne and Atondo-Siu, Gilberto and Hutchings, Alice and Vasek, Marie},
  title =	{{Investigating Wrench Attacks: Physical Attacks Targeting Cryptocurrency Users}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{24:1--24:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.24},
  URN =		{urn:nbn:de:0030-drops-209609},
  doi =		{10.4230/LIPIcs.AFT.2024.24},
  annote =	{Keywords: cryptocurrency, Bitcoin, crime, wrench attack, physical attack}
}
Document
APPROX
On the NP-Hardness Approximation Curve for Max-2Lin(2)

Authors: Björn Martinsson

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
In the Max-2Lin(2) problem you are given a system of equations of the form x_i + x_j ≡ b mod 2, and your objective is to find an assignment that satisfies as many equations as possible. Let c ∈ [0.5, 1] denote the maximum fraction of satisfiable equations. In this paper we construct a curve s(c) such that it is NP-hard to find a solution satisfying at least a fraction s(c) of the equations. This curve either matches or improves all previously known NP-hardness inapproximability results for Max-2Lin(2). In particular, we show that if c ⩾ 0.9232 then (1 - s(c))/(1 - c) > 1.48969, which improves the NP-hardness inapproximability constant for the min deletion version of Max-2Lin(2). Our work complements the work of O'Donnell and Wu that studied the same question assuming the Unique Games Conjecture. Similar to earlier inapproximability results for Max-2Lin(2), we use a gadget reduction from the (2^k - 1)-ary Hadamard predicate. Previous works used k ranging from 2 to 4. Our main result is a procedure that takes a gadget for some fixed k and uses it as a building block to construct better and better gadgets as k tends to infinity. Our method can be used to boost the results of both smaller gadgets created by hand (k = 3) and larger gadgets constructed using a computer (k = 4).
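For concreteness, here is a toy Python sketch of the objective only (it has nothing to do with the gadget constructions in the paper; the brute-force solver and the example instance are illustrative assumptions):

from itertools import product

def max_satisfiable_fraction(n_vars, equations):
    # Each equation is a triple (i, j, b) meaning x_i + x_j ≡ b (mod 2).
    best = 0
    for assignment in product((0, 1), repeat=n_vars):
        satisfied = sum((assignment[i] + assignment[j]) % 2 == b
                        for i, j, b in equations)
        best = max(best, satisfied)
    return best / len(equations)

# An odd cycle of "not equal" constraints cannot be fully satisfied, so c = 2/3.
equations = [(0, 1, 1), (1, 2, 1), (2, 0, 1)]
print(max_satisfiable_fraction(3, equations))  # 0.666...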

Cite as

Björn Martinsson. On the NP-Hardness Approximation Curve for Max-2Lin(2). In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 11:1-11:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{martinsson:LIPIcs.APPROX/RANDOM.2024.11,
  author =	{Martinsson, Bj\"{o}rn},
  title =	{{On the NP-Hardness Approximation Curve for Max-2Lin(2)}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{11:1--11:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.11},
  URN =		{urn:nbn:de:0030-drops-210049},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.11},
  annote =	{Keywords: Inapproximability, NP-hardness, 2Lin(2), Max-Cut, Gadget}
}
Document
RANDOM
Faster Algorithms for Schatten-p Low Rank Approximation

Authors: Praneeth Kacham and David P. Woodruff

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study algorithms for the Schatten-p Low Rank Approximation (LRA) problem. First, we show that by using fast rectangular matrix multiplication algorithms and different block sizes, we can improve the running time of the algorithms in the recent work of Bakshi, Clarkson and Woodruff (STOC 2022). We then show that by carefully combining our new algorithm with the algorithm of Li and Woodruff (ICML 2020), we can obtain even faster algorithms for Schatten-p LRA. While the block-based algorithms are fast in the real number model, we do not have a stability analysis which shows that the algorithms work when implemented on a machine with polylogarithmic bits of precision. We show that the LazySVD algorithm of Allen-Zhu and Li (NeurIPS 2016) can be implemented on a floating point machine with a number of bits of precision that is only logarithmic in the input parameters. As far as we are aware, this is the first stability analysis of any algorithm using O((k/√ε)poly(log n)) matrix-vector products with the matrix A to output a (1+ε)-approximate solution for the rank-k Schatten-p LRA problem.
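As background for the objective (a dense-SVD baseline sketch, not the fast or numerically analyzed algorithms from the paper; the random test matrix is an illustrative assumption), the snippet below computes the Schatten-p norm as the l_p norm of the singular values and forms a best rank-k approximation via truncated SVD, which is optimal under every Schatten-p norm by the Eckart-Young-Mirsky theorem:

import numpy as np

def schatten_p(A, p):
    # Schatten-p norm: l_p norm of the singular values of A.
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

def best_rank_k(A, k):
    # Truncated SVD: optimal rank-k approximation for any Schatten-p norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
B = best_rank_k(A, k=5)
print(schatten_p(A - B, p=3))  # Schatten-3 error of the rank-5 truncation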

Cite as

Praneeth Kacham and David P. Woodruff. Faster Algorithms for Schatten-p Low Rank Approximation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 55:1-55:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{kacham_et_al:LIPIcs.APPROX/RANDOM.2024.55,
  author =	{Kacham, Praneeth and Woodruff, David P.},
  title =	{{Faster Algorithms for Schatten-p Low Rank Approximation}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{55:1--55:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.55},
  URN =		{urn:nbn:de:0030-drops-210488},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.55},
  annote =	{Keywords: Low Rank Approximation, Schatten Norm, Rectangular Matrix Multiplication, Stability Analysis}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets respectively.
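To make the idea of a structured prompt concrete, here is a hypothetical Python sketch (the paper's actual SEVM output format, prompt wording, and LLM interface are not reproduced; the function name, field layout, and the example swap formula are illustrative assumptions):

def build_alignment_prompt(doc_claim: str, balance_change: str, branch_condition: str) -> str:
    # Combine a documentation claim with symbolic facts recovered from the bytecode.
    return (
        "You are auditing a DeFi smart contract.\n\n"
        f"Documentation claim:\n{doc_claim}\n\n"
        f"Symbolic token-balance change derived from the on-chain bytecode:\n{balance_change}\n\n"
        f"Branch condition under which it applies:\n{branch_condition}\n\n"
        "Question: Is the deployed code consistent with the documentation? "
        "Answer CONSISTENT or INCONSISTENT, then justify briefly."
    )

print(build_alignment_prompt(
    doc_claim="A 0.3% fee is charged on every swap.",
    balance_change="amount_out = amount_in * 997 * reserve_out / (reserve_in * 1000 + amount_in * 997)",
    branch_condition="amount_in > 0",
))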

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
HOBBIT: Hashed OBject Based InTegrity

Authors: Matthias Bernad and Stefan Brunthaler

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
C vulnerabilities usually hold verbatim for C++ programs. The counterfeit-object-oriented programming (COOP) attack demonstrated that this relation is asymmetric, i.e., it only applies to C++. The problem pinpointed by this COOP attack is that C++ does not validate the integrity of its objects. By injecting malicious objects with manipulated virtual function table pointers, attackers can hijack the control flow of programs. The software security community addressed the COOP problem in the years following its discovery, but with the emergence of transient-execution attacks, such as Spectre, researchers also shifted their attention. We present Hobbit, a software-only solution to prevent COOP attacks by validating object integrity for virtual function pointer tables. Hobbit does not require any hardware-specific features, scales to multi-million lines of C++ source code, and our LLVM-based implementation offers a configurable performance impact between 2.80% and 121.63% on compute-intensive SPEC CPU C++ benchmarks. Hobbit’s security analysis indicates strong resistance to brute-forcing attacks and demonstrates additional benefits of using execute-only memory.

Cite as

Matthias Bernad and Stefan Brunthaler. HOBBIT: Hashed OBject Based InTegrity. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 7:1-7:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bernad_et_al:LIPIcs.ECOOP.2024.7,
  author =	{Bernad, Matthias and Brunthaler, Stefan},
  title =	{{HOBBIT: Hashed OBject Based InTegrity}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{7:1--7:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.7},
  URN =		{urn:nbn:de:0030-drops-208566},
  doi =		{10.4230/LIPIcs.ECOOP.2024.7},
  annote =	{Keywords: software security, code-reuse attacks, language-based security, counterfeit-object-oriented programming, object integrity, compiler security}
}
Document
Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning

Authors: Cormac Flanagan and Stephen N. Freund

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Rely-guarantee (RG) logic uses thread interference specifications (relies and guarantees) to reason about the correctness of multithreaded software. Unfortunately, RG logic requires each function postcondition to be "stabilized" or specialized to the behavior of other threads, making it difficult to write function specifications that are reusable at multiple call sites. This paper presents mover logic, which extends RG logic to address this problem via the notion of atomic functions. Atomic functions behave as if they execute serially without interference from concurrent threads, and so they can be assigned more general and reusable specifications that avoid the stabilization requirement of RG logic. Several practical verifiers (Calvin-R, QED, CIVL, Armada, Anchor, etc.) have demonstrated the modularity benefits of atomic function specifications. However, the complexity of these systems and their correctness proofs makes it challenging to understand and extend these systems. Mover logic formalizes the central ideas of reduction in a declarative program logic that provides a foundation for future work in this area.

Cite as

Cormac Flanagan and Stephen N. Freund. Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 16:1-16:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{flanagan_et_al:LIPIcs.ECOOP.2024.16,
  author =	{Flanagan, Cormac and Freund, Stephen N.},
  title =	{{Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{16:1--16:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.16},
  URN =		{urn:nbn:de:0030-drops-208654},
  doi =		{10.4230/LIPIcs.ECOOP.2024.16},
  annote =	{Keywords: concurrent program verification, reduction, rely-guarantee reasoning, synchronization}
}
Document
Qafny: A Quantum-Program Verifier

Authors: Liyi Li, Mingwei Zhu, Rance Cleaveland, Alexander Nicolellis, Yi Lee, Le Chang, and Xiaodi Wu

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Because of the probabilistic/nondeterministic behavior of quantum programs, it is highly advisable to verify them formally to ensure that they correctly implement their specifications. Formal verification, however, also traditionally requires significant effort. To address this challenge, we present Qafny, an automated proof system based on the program verifier Dafny and designed for verifying quantum programs. At its core, Qafny uses a type-guided quantum proof system that translates quantum operations to classical array operations modeled within a classical separation logic framework. We prove the soundness and completeness of our proof system and implement a prototype compiler that transforms Qafny programs and specifications into Dafny for automated verification purposes. We then illustrate the utility of Qafny’s automated capabilities in efficiently verifying important quantum algorithms, including quantum-walk algorithms, Grover’s algorithm, and Shor’s algorithm.

Cite as

Liyi Li, Mingwei Zhu, Rance Cleaveland, Alexander Nicolellis, Yi Lee, Le Chang, and Xiaodi Wu. Qafny: A Quantum-Program Verifier. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 24:1-24:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{li_et_al:LIPIcs.ECOOP.2024.24,
  author =	{Li, Liyi and Zhu, Mingwei and Cleaveland, Rance and Nicolellis, Alexander and Lee, Yi and Chang, Le and Wu, Xiaodi},
  title =	{{Qafny: A Quantum-Program Verifier}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{24:1--24:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.24},
  URN =		{urn:nbn:de:0030-drops-208735},
  doi =		{10.4230/LIPIcs.ECOOP.2024.24},
  annote =	{Keywords: Quantum Computing, Automated Verification, Separation Logic}
}
Document
Generalizing Shape Analysis with Gradual Types

Authors: Zeina Migeed, James Reed, Jason Ansel, and Jens Palsberg

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Tensors are multi-dimensional data structures that can represent the data processed by machine learning tasks. Tensor programs tend to be short and readable, and they can leverage libraries and frameworks such as TensorFlow and PyTorch, as well as modern hardware such as GPUs and TPUs. However, tensor programs also tend to obscure shape information, which can cause shape errors that are difficult to find. Such shape errors can be avoided by a combination of shape annotations and shape analysis, but such annotations are burdensome to come up with manually. In this paper, we use gradual typing to reduce the barrier to entry. Gradual typing offers a way to incrementally introduce type annotations into programs. From there, we focus on tool support for type migration, a concept that closely models code-annotation tasks and allows us to do shape reasoning and utilize it for different purposes. We develop a comprehensive gradual typing theory to reason about tensor shapes. We then ask three fundamental questions about a gradually typed tensor program. (1) Does the program have a static migration? (2) Given a program and some arithmetic constraints on shapes, can we migrate the program according to the constraints? (3) Can we eliminate branches that depend on shapes? We develop novel tools to address the three problems. For the third problem, there are currently two PyTorch tools that aim to eliminate branches. They do so by eliminating them for just a single input. Our tool is the first to eliminate branches for an infinite class of inputs, using static shape information. Our tools help prevent bugs, alleviate the burden on the programmer of annotating the program, and improve the process of program transformation.
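As a minimal illustration of gradual shape reasoning (a sketch only, not the paper's type system or migration tools; the None-as-dynamic-dimension convention and the helper names are illustrative assumptions), the snippet below treats an unannotated dimension as consistent with any size while requiring annotated dimensions to agree, so an inner-dimension mismatch can be flagged from the annotations alone:

import numpy as np

def consistent(annotated, actual):
    # Gradual consistency: None (unknown dim) matches anything; known dims must agree.
    return len(annotated) == len(actual) and all(
        a is None or a == b for a, b in zip(annotated, actual)
    )

def checked_matmul(x, y, x_shape, y_shape):
    assert consistent(x_shape, x.shape) and consistent(y_shape, y.shape)
    # Static reasoning: if both inner dimensions are annotated, they must agree.
    if x_shape[-1] is not None and y_shape[0] is not None:
        assert x_shape[-1] == y_shape[0], "shape error caught from annotations"
    return x @ y

x = np.ones((4, 3))
y = np.ones((3, 2))
print(checked_matmul(x, y, (None, 3), (3, None)).shape)  # (4, 2)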

Cite as

Zeina Migeed, James Reed, Jason Ansel, and Jens Palsberg. Generalizing Shape Analysis with Gradual Types. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 29:1-29:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{migeed_et_al:LIPIcs.ECOOP.2024.29,
  author =	{Migeed, Zeina and Reed, James and Ansel, Jason and Palsberg, Jens},
  title =	{{Generalizing Shape Analysis with Gradual Types}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{29:1--29:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.29},
  URN =		{urn:nbn:de:0030-drops-208786},
  doi =		{10.4230/LIPIcs.ECOOP.2024.29},
  annote =	{Keywords: Tensor Shapes, Gradual Types, Migration}
}
Document
Failure Transparency in Stateful Dataflow Systems

Authors: Aleksey Veresov, Jonas Spenger, Paris Carbone, and Philipp Haller

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Failure transparency enables users to reason about distributed systems at a higher level of abstraction, where complex failure-handling logic is hidden. This is especially true for stateful dataflow systems, which are the backbone of many cloud applications. In particular, this paper focuses on proving failure transparency in Apache Flink, a popular stateful dataflow system. Even though failure transparency is a critical aspect of Apache Flink, to date it has not been formally proven. Showing that the failure transparency mechanism is correct, however, is challenging due to the complexity of the mechanism itself. Nevertheless, this complexity can be effectively hidden behind a failure transparent programming interface. To show that Apache Flink is failure transparent, we model it in small-step operational semantics. Next, we provide a novel definition of failure transparency based on observational explainability, a concept which relates executions according to their observations. Finally, we provide a formal proof of failure transparency for the implementation model; i.e., we prove that the failure-free model correctly abstracts from the failure-related details of the implementation model. We also show liveness of the implementation model under a fair execution assumption. These results are a first step towards a verified stack for stateful dataflow systems.

Cite as

Aleksey Veresov, Jonas Spenger, Paris Carbone, and Philipp Haller. Failure Transparency in Stateful Dataflow Systems. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 42:1-42:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{veresov_et_al:LIPIcs.ECOOP.2024.42,
  author =	{Veresov, Aleksey and Spenger, Jonas and Carbone, Paris and Haller, Philipp},
  title =	{{Failure Transparency in Stateful Dataflow Systems}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{42:1--42:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.42},
  URN =		{urn:nbn:de:0030-drops-208911},
  doi =		{10.4230/LIPIcs.ECOOP.2024.42},
  annote =	{Keywords: Failure transparency, stateful dataflow, operational semantics, checkpoint recovery}
}
Document
Artifact
Qafny: A Quantum-Program Verifier (Artifact)

Authors: Liyi Li, Mingwei Zhu, Rance Cleaveland, Alexander Nicolellis, Yi Lee, Le Chang, and Xiaodi Wu

Published in: DARTS, Volume 10, Issue 2, Special Issue of the 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
This artifact contains the Coq theory files for the Qafny proof system, including the formalism of the Qafny syntax, semantics, type system, and proof system, together with the theorem proofs of type soundness and of proof system soundness and completeness. It also contains the compiled Dafny example programs generated by our Qafny-to-Dafny prototype compiler. These example programs serve as validations of our Qafny-to-Dafny prototype compiler mechanism. The main work is introduced in the Qafny paper, which develops a separation-logic-style verification framework for quantum programs.

Cite as

Liyi Li, Mingwei Zhu, Rance Cleaveland, Alexander Nicolellis, Yi Lee, Le Chang, and Xiaodi Wu. Qafny: A Quantum-Program Verifier (Artifact). In Special Issue of the 38th European Conference on Object-Oriented Programming (ECOOP 2024). Dagstuhl Artifacts Series (DARTS), Volume 10, Issue 2, pp. 12:1-12:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{li_et_al:DARTS.10.2.12,
  author =	{Li, Liyi and Zhu, Mingwei and Cleaveland, Rance and Nicolellis, Alexander and Lee, Yi and Chang, Le and Wu, Xiaodi},
  title =	{{Qafny: A Quantum-Program Verifier (Artifact)}},
  pages =	{12:1--12:2},
  journal =	{Dagstuhl Artifacts Series},
  ISBN =	{978-3-95977-342-3},
  ISSN =	{2509-8195},
  year =	{2024},
  volume =	{10},
  number =	{2},
  editor =	{Li, Liyi and Zhu, Mingwei and Cleaveland, Rance and Nicolellis, Alexander and Lee, Yi and Chang, Le and Wu, Xiaodi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.10.2.12},
  URN =		{urn:nbn:de:0030-drops-209104},
  doi =		{10.4230/DARTS.10.2.12},
  annote =	{Keywords: Quantum Computing, Automated Verification, Separation Logic}
}
Document
Geometric Enumeration of Localized DNA Strand Displacement Reaction Networks

Authors: Matthew R. Lakin and Sarika Kumar

Published in: LIPIcs, Volume 314, 30th International Conference on DNA Computing and Molecular Programming (DNA 30) (2024)


Abstract
Localized molecular devices are a powerful tool for engineering complex information-processing circuits and molecular robots. Their practical advantages include speed and scalability of interactions between components tethered near to each other on an underlying nanostructure, and the ability to restrict interactions between more distant components. The latter is a critical feature that must be factored into computational tools for the design and simulation of localized molecular devices: unlike in solution-phase systems, the geometries of molecular interactions must be accounted for when attempting to determine the network of possible reactions in a tethered molecular system. This work aims to address that challenge by integrating, for the first time, automated approaches to the analysis of molecular geometry with reaction enumeration algorithms for DNA strand displacement reaction networks that can be applied to tethered molecular systems. By adapting a simple approach to solving the biophysical constraints inherent in molecular interactions to be applicable to tethered systems, we produce a localized reaction enumeration system that enhances previous approaches to reaction enumeration in tethered systems by not requiring users to explicitly specify the subsets of components that are capable of interacting. This greatly simplifies the user’s task and could also be used as the basis of future systems for automated placement or routing of signal-transmission and logical processing in molecular devices. We apply this system to several published example systems from the literature, including both tethered molecular logic systems and molecular robots.
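To give a flavor of the geometric constraints being automated, here is a toy Python reachability check (the paper's actual biophysical model, parameters, and enumeration algorithm are not reproduced; the per-nucleotide reach below is an assumed value for single-stranded DNA):

import math

NM_PER_NT = 0.63  # assumed reach per single-stranded nucleotide, in nanometres

def can_interact(pos_a, reach_nt_a, pos_b, reach_nt_b):
    # Tethered components can only react if their maximal reaches overlap.
    distance = math.dist(pos_a, pos_b)
    return distance <= (reach_nt_a + reach_nt_b) * NM_PER_NT

# Two components with 16-nt flexible segments, tethered 20 nm and 40 nm apart:
print(can_interact((0.0, 0.0), 16, (20.0, 0.0), 16))  # True  (combined reach ~20.2 nm)
print(can_interact((0.0, 0.0), 16, (40.0, 0.0), 16))  # False (too far apart)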

Cite as

Matthew R. Lakin and Sarika Kumar. Geometric Enumeration of Localized DNA Strand Displacement Reaction Networks. In 30th International Conference on DNA Computing and Molecular Programming (DNA 30). Leibniz International Proceedings in Informatics (LIPIcs), Volume 314, pp. 1:1-1:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{lakin_et_al:LIPIcs.DNA.30.1,
  author =	{Lakin, Matthew R. and Kumar, Sarika},
  title =	{{Geometric Enumeration of Localized DNA Strand Displacement Reaction Networks}},
  booktitle =	{30th International Conference on DNA Computing and Molecular Programming (DNA 30)},
  pages =	{1:1--1:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-344-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{314},
  editor =	{Seki, Shinnosuke and Stewart, Jaimie Marie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.30.1},
  URN =		{urn:nbn:de:0030-drops-209294},
  doi =		{10.4230/LIPIcs.DNA.30.1},
  annote =	{Keywords: Localized circuits, reaction enumeration, DNA strand displacement, geometry, molecular computing}
}
Document
Is Familiarity Reflected in the Spatial Knowledge Revealed by Sketch Maps?

Authors: Markus Kattenbeck, Daniel R. Montello, Martin Raubal, and Ioannis Giannopoulos

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
Despite the frequent use of sketch maps in assessing environmental knowledge, it remains unclear how and to what degree familiarity impacts sketch map content. In the present study, we assess whether different levels of familiarity relate to differences in the content and spatial accuracy of environmental knowledge depicted in sketch maps drawn for the purpose of route instructions. To this end, we conduct a real-world wayfinding study with 91 participants, all of whom have to walk along a pre-defined route of approximately 2.3 km in length. Prior to the walk, we collect self-report familiarity ratings from participants for both a set of 15 landmarks and a set of areas we define as hexagons along the route. Once participants finished walking the route, they were asked to sketch a map of the route, specifically a sketch that would enable a person who had never walked the route to follow it. We found that participants unfamiliar with the areas along the route sketched fewer features than familiar participants did. Contrary to our expectations, however, we found that landmarks were sketched or omitted regardless of participants' level of familiarity with the landmarks. We were also surprised that the level of familiarity was not correlated with the accuracy of the sketched order of features along the route, with the position of sketched features in relation to the route, or with the metric locational accuracy of feature placement on the sketches. These results lead us to conclude that different aspects of feature salience influence whether features are included on sketch maps, independent of familiarity. They also point to the influence of task context on the content of sketch maps, again independent of familiarity. We propose further studies to more fully explore these ideas.

Cite as

Markus Kattenbeck, Daniel R. Montello, Martin Raubal, and Ioannis Giannopoulos. Is Familiarity Reflected in the Spatial Knowledge Revealed by Sketch Maps? In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 6:1-6:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{kattenbeck_et_al:LIPIcs.COSIT.2024.6,
  author =	{Kattenbeck, Markus and Montello, Daniel R. and Raubal, Martin and Giannopoulos, Ioannis},
  title =	{{Is Familiarity Reflected in the Spatial Knowledge Revealed by Sketch Maps?}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{6:1--6:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.6},
  URN =		{urn:nbn:de:0030-drops-208215},
  doi =		{10.4230/LIPIcs.COSIT.2024.6},
  annote =	{Keywords: Familiarity, Spatial Knowledge, Sketch Maps}
}
Document
Short Paper
Evaluating the Ability of Large Language Models to Reason About Cardinal Directions (Short Paper)

Authors: Anthony G Cohn and Robert E Blackwell

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
We investigate the abilities of a representative set of Large Language Models (LLMs) to reason about cardinal directions (CDs). To do so, we create two datasets: the first, co-created with ChatGPT, focuses largely on recall of world knowledge about CDs; the second is generated from a set of templates, comprehensively testing an LLM’s ability to determine the correct CD given a particular scenario. The templates allow for a number of degrees of variation, such as the means of locomotion of the agent involved and whether the scenario is set in the first, second, or third person. Our experiments show that although LLMs are able to perform well on the simpler dataset, on the second, more complex dataset no LLM is able to reliably determine the correct CD, even with a temperature setting of zero.
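As a hypothetical sketch of the template-based generation described above (the paper's actual templates, degrees of variation, and answer key are not reproduced; the single toy template below is an illustrative assumption), the snippet instantiates a question over locomotion, initial facing, and turn direction, and derives the ground-truth cardinal direction for each instance:

import itertools

TEMPLATE = ("You are {locomotion} and facing {facing}. You turn 90 degrees to your "
            "{turn}. Which cardinal direction are you now facing?")

DIRS = ["north", "east", "south", "west"]

def answer(facing, turn):
    # A right turn moves one step clockwise through DIRS, a left turn one step anticlockwise.
    step = 1 if turn == "right" else -1
    return DIRS[(DIRS.index(facing) + step) % 4]

dataset = [
    {"question": TEMPLATE.format(locomotion=loc, facing=f, turn=t), "answer": answer(f, t)}
    for loc, f, t in itertools.product(["walking", "cycling"], DIRS, ["left", "right"])
]
print(len(dataset), dataset[0]["question"], dataset[0]["answer"])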

Cite as

Anthony G Cohn and Robert E Blackwell. Evaluating the Ability of Large Language Models to Reason About Cardinal Directions (Short Paper). In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 28:1-28:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{cohn_et_al:LIPIcs.COSIT.2024.28,
  author =	{Cohn, Anthony G and Blackwell, Robert E},
  title =	{{Evaluating the Ability of Large Language Models to Reason About Cardinal Directions}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{28:1--28:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.28},
  URN =		{urn:nbn:de:0030-drops-208432},
  doi =		{10.4230/LIPIcs.COSIT.2024.28},
  annote =	{Keywords: Large Language Models, Spatial Reasoning, Cardinal Directions}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on 2 datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
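As a minimal sketch of retrieval-augmented example selection for in-context learning (not the paper's pipelines; a bag-of-words cosine similarity stands in for whatever retrieval model is actually used, and the example problems are illustrative assumptions), the snippet below picks the solved modelling examples most similar to a new problem description so they can be placed in the prompt:

import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_examples(query, solved_examples, k=2):
    # Return the k (description, model) pairs whose descriptions are most similar to the query.
    q = Counter(query.lower().split())
    ranked = sorted(solved_examples,
                    key=lambda ex: cosine(q, Counter(ex[0].lower().split())),
                    reverse=True)
    return ranked[:k]

solved = [
    ("Assign nurses to shifts so that no nurse works two consecutive nights.", "<CP model A>"),
    ("Pack items into bins without exceeding each bin's capacity.", "<CP model B>"),
    ("Schedule exams so that no student has two exams at the same time.", "<CP model C>"),
]
picked = select_examples("Timetable lectures so that no teacher gives two classes at the same time.", solved)
print([description for description, _ in picked])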

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
  • Refine by Author
  • 2 Calbimonte, Jean-Paul
  • 2 Chang, Le
  • 2 Cleaveland, Rance
  • 2 Lee, Yi
  • 2 Li, Liyi
  • Show More...

  • Refine by Classification
  • 4 Theory of computation → Graph algorithms analysis
  • 3 Theory of computation → Constraint and logic programming
  • 3 Theory of computation → Facility location and clustering
  • 3 Theory of computation → Program verification
  • 2 Applied computing → Computational biology
  • Show More...

  • Refine by Keyword
  • 3 Large Language Models
  • 2 Automated Verification
  • 2 Lower bounds
  • 2 NP-hardness
  • 2 Quantum Computing
  • Show More...

  • Refine by Type
  • 43 document

  • Refine by Publication Year
  • 40 2024
  • 1 2014
  • 1 2021
  • 1 2023
