19 Search Results for "Yu, Albert"


Document
APPROX
On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms

Authors: Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Dense subgraph discovery is an important problem in graph mining and network analysis with several applications. Two canonical polynomial-time solvable problems here are to find a maxcore (subgraph of maximum min degree) and to find a densest subgraph (subgraph of maximum average degree). Veldt, Benson, and Kleinberg [Veldt et al., 2021] introduced the generalized p-mean densest subgraph problem, which captures the maxcore problem when p = -∞ and the densest subgraph problem when p = 1. They observed that for p ≥ 1, the objective function is supermodular and hence the problem can be solved in polynomial time. In this work, we focus on the p-mean densest subgraph problem for p ∈ (-∞, 1). We prove that for every p ∈ (-∞,1), the problem is NP-hard, thus resolving an open question from [Veldt et al., 2021]. We also show that for every p ∈ (0,1), the weighted version of the problem is APX-hard. On the algorithmic front, we describe two simple 1/2-approximation algorithms for every p ∈ (-∞, 1). We complement the approximation algorithms by exhibiting non-trivial instances on which the algorithms simultaneously achieve an approximation factor of at most 1/2.
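
To make the objective concrete: for a vertex set S, the generalized p-mean objective is the p-th power mean of the degrees in the induced subgraph, so p = 1 gives average degree and p → -∞ gives minimum degree. Below is a minimal greedy-peeling sketch for this family, restricted to p ∈ (0,1) for simplicity; it illustrates the objective and is not necessarily either of the paper's two 1/2-approximation algorithms.

def p_mean(deg, nodes, p):
    # M_p(S): the p-th power mean of induced degrees, for p in (0, 1).
    if not nodes:
        return 0.0
    return (sum(deg[v] ** p for v in nodes) / len(nodes)) ** (1.0 / p)

def greedy_peel(adj, p):
    # Repeatedly delete a minimum-degree vertex; return the best subset seen.
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    best, best_val = set(alive), p_mean(deg, alive, p)
    while alive:
        v = min(alive, key=deg.get)
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
        val = p_mean(deg, alive, p)
        if val > best_val:
            best, best_val = set(alive), val
    return best, best_val

# Toy usage: a triangle with one pendant vertex; peeling finds the triangle.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_peel(adj, p=0.5))   # the triangle {1, 2, 3} scores highest (~2.0)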

Cite as

Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu. On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.APPROX/RANDOM.2024.9,
  author =	{Chandrasekaran, Karthekeyan and Chekuri, Chandra and Torres, Manuel R. and Zhu, Weihao},
  title =	{{On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  URN =		{urn:nbn:de:0030-drops-210025},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  annote =	{Keywords: Densest subgraph problem, Hardness of approximation, Approximation algorithms}
}
Document
SoK: Zero-Knowledge Range Proofs

Authors: Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Zero-knowledge range proofs (ZKRPs) allow a prover to convince a verifier that a secret value lies in a given interval. ZKRPs have numerous applications, from anonymous credentials and auctions to confidential transactions in cryptocurrencies. At the same time, a plethora of ZKRP constructions exist in the literature, each with its own trade-offs. In this work, we systematize the knowledge around ZKRPs. We create a classification of existing constructions based on the underlying building techniques, and we summarize their properties. We compare schemes both in terms of properties and in terms of efficiency, and provide a guideline to assist in selecting an appropriate ZKRP for different application requirements. Finally, we discuss a number of interesting open research problems.
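
For intuition, here is the statement underlying the common bit-decomposition technique, in plain arithmetic. This is only the relation being proven; an actual ZKRP enforces it over committed values, in zero knowledge, and involves no code like the following.

def bit_decompose(x, n):
    # Witness for "x lies in [0, 2**n)": the n bits of x.
    assert 0 <= x < 2 ** n
    return [(x >> i) & 1 for i in range(n)]

def range_statement_holds(bits, x):
    # The two relations a bit-decomposition range proof enforces:
    # each b_i is Boolean, and the bits recompose to x.
    booleanity = all(b * (b - 1) == 0 for b in bits)
    recomposition = sum(b << i for i, b in enumerate(bits)) == x
    return booleanity and recomposition

assert range_statement_holds(bit_decompose(42, 8), 42)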

Cite as

Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang. SoK: Zero-Knowledge Range Proofs. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{christ_et_al:LIPIcs.AFT.2024.14,
  author =	{Christ, Miranda and Baldimtsi, Foteini and Chalkias, Konstantinos Kryptos and Maram, Deepak and Roy, Arnab and Wang, Joy},
  title =	{{SoK: Zero-Knowledge Range Proofs}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.14},
  URN =		{urn:nbn:de:0030-drops-209504},
  doi =		{10.4230/LIPIcs.AFT.2024.14},
  annote =	{Keywords: Range proofs, zero knowledge}
}
Document
Taking a Closer Look: An Outlier-Driven Approach to Compilation-Time Optimization

Authors: Florian Huemer, David Leopoldseder, Aleksandar Prokopec, Raphael Mosaner, and Hanspeter Mössenböck

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Improving compilation time in optimizing compilers is challenging due to their large number of interconnected components, including compiler optimizations, compiler tiers, heuristics, and profiling information. Despite this complexity, research in compilation-time optimization is often guided by analyzing metrics of entire program runs, such as the total compilation time and overall memory footprint. This coarse-grained perspective hides relevant information, such as source program functions for which the compiler allocates a lot of memory or compiler optimizations with a high impact on the total compilation time. This leaves high-level metrics as the only reference point for driving optimization design. Consequently, compilation-time regressions in one program function that are obscured by improvements in other functions stay undetected, while the impacts of compiler changes on untouched parts of the compiler are largely unknown. Furthermore, developers overlook long-standing compiler defects because their high-level metrics do not change over time. To address these limitations, we propose ICON, a new data-driven approach to compilation-time optimization that breaks up high-level metrics into individual source program functions, compiler optimizations, or even individual instructions in the compiler source code. Our methodology enables an iterative in-depth compilation-time analysis, focusing on outliers to identify optimization opportunities. We show that outliers, both in terms of time spent in a particular compiler optimization and in terms of individual compilations that take substantially longer, can reveal potential problems in the compiler implementation. We applied our approach to GraalVM and extracted data for several of its language runtimes. We analyzed the resulting data, presenting the first detailed look into the distribution of compilation time in the GraalVM compiler, a state-of-the-art multi-language compiler, and identified defects that led to regressions in overall compilation time or in the compilation time of specific languages. We furthermore designed two optimizations based on the identified outliers that improve compilation time by between 2.25% and 9.45%. We believe that our approach can guide compiler developers in finding usually overlooked optimization potential and defects, and can focus future research efforts on making compilers more efficient.
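
As a generic illustration of what outlier-driven analysis over per-function metrics looks like (a simple Tukey-fence sketch with made-up numbers, not ICON's actual methodology):

import statistics

def compile_time_outliers(ms_by_function, k=1.5):
    # Flag source functions whose compilation time exceeds Q3 + k * IQR.
    values = sorted(ms_by_function.values())
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    fence = q3 + k * (q3 - q1)
    return {f: t for f, t in ms_by_function.items() if t > fence}

times = {"parse": 12.0, "inline": 15.0, "gvn": 14.0,
         "escape": 13.0, "regalloc": 96.0}
print(compile_time_outliers(times))   # {'regalloc': 96.0}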

Cite as

Florian Huemer, David Leopoldseder, Aleksandar Prokopec, Raphael Mosaner, and Hanspeter Mössenböck. Taking a Closer Look: An Outlier-Driven Approach to Compilation-Time Optimization. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 20:1-20:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{huemer_et_al:LIPIcs.ECOOP.2024.20,
  author =	{Huemer, Florian and Leopoldseder, David and Prokopec, Aleksandar and Mosaner, Raphael and M\"{o}ssenb\"{o}ck, Hanspeter},
  title =	{{Taking a Closer Look: An Outlier-Driven Approach to Compilation-Time Optimization}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{20:1--20:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.20},
  URN =		{urn:nbn:de:0030-drops-208693},
  doi =		{10.4230/LIPIcs.ECOOP.2024.20},
  annote =	{Keywords: Compilation time, outliers, dynamic languages, virtual machines, GraalVM, ICON}
}
Document
Tenspiler: A Verified-Lifting-Based Compiler for Tensor Operations

Authors: Jie Qiu, Colin Cai, Sahil Bhatia, Niranjan Hasabnis, Sanjit A. Seshia, and Alvin Cheung

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Tensor processing infrastructures such as deep learning frameworks and specialized hardware accelerators have revolutionized how computationally intensive code from domains such as deep learning and image processing is executed and optimized. These infrastructures provide powerful and expressive abstractions while ensuring high performance. However, to utilize them, code must be written specifically using the APIs / ISAs of such software frameworks or hardware accelerators. Importantly, given the fast pace of innovation in these domains, code written today quickly becomes legacy as new frameworks and accelerators are developed, and migrating such legacy code manually is a considerable effort. To enable developers to leverage such DSLs while preserving their current programming paradigm, we present Tenspiler, a verified-lifting-based compiler that uses program synthesis to translate sequential programs written in general-purpose programming languages (e.g., C++ or Python code that does not leverage any specialized framework or accelerator) into tensor operations. Central to Tenspiler is our carefully crafted yet simple intermediate language, named TensIR, that expresses tensor operations. TensIR enables efficient lifting, verification, and code generation. Unlike classical pattern-matching-based compilers, Tenspiler uses program synthesis to translate input code into TensIR, which is then compiled to the target API / ISA. Tenspiler already supports six DSLs, spanning a broad spectrum of software and hardware environments. Furthermore, we show that new backends can be easily supported by adding simple pattern-matching rules for TensIR. Using 10 real-world code benchmark suites, our experimental evaluation shows that by translating code to be executed on 6 different software frameworks and hardware devices, Tenspiler offers an average 105× kernel-level and 9.65× end-to-end execution time improvement over the fully optimized sequential implementation of the same benchmarks.
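
For intuition, the kind of rewrite verified lifting performs, with NumPy standing in here for a tensor backend (an illustration of the input and output styles only, not actual Tenspiler output):

import numpy as np

def dot_sequential(a, b):
    # Input style: general-purpose sequential code, no tensor framework.
    acc = 0.0
    for i in range(len(a)):
        acc += a[i] * b[i]
    return acc

def dot_lifted(a, b):
    # Target style: the same computation as a single tensor operation.
    return np.dot(a, b)

a = np.random.rand(1000)
b = np.random.rand(1000)
assert np.isclose(dot_sequential(a, b), dot_lifted(a, b))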

Cite as

Jie Qiu, Colin Cai, Sahil Bhatia, Niranjan Hasabnis, Sanjit A. Seshia, and Alvin Cheung. Tenspiler: A Verified-Lifting-Based Compiler for Tensor Operations. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 32:1-32:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{qiu_et_al:LIPIcs.ECOOP.2024.32,
  author =	{Qiu, Jie and Cai, Colin and Bhatia, Sahil and Hasabnis, Niranjan and Seshia, Sanjit A. and Cheung, Alvin},
  title =	{{Tenspiler: A Verified-Lifting-Based Compiler for Tensor Operations}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{32:1--32:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.32},
  URN =		{urn:nbn:de:0030-drops-208817},
  doi =		{10.4230/LIPIcs.ECOOP.2024.32},
  annote =	{Keywords: Program Synthesis, Code Transpilation, Tensor DSLs, Verification}
}
Document
Qualitative Formalization of a Curve on a Two-Dimensional Plane

Authors: Kazuko Takahashi

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
We propose a theoretical framework for qualitative spatial representation and reasoning about curves on a two-dimensional plane. We regard a curve as a sequence of segments, each of which has its own direction and convexity, and give it a symbolic expression. We propose a reasoning method on this symbolic expression: when only a few segments of a curve are visible, we find the missing segments by connecting the visible ones to create a globally smooth continuous curve. In addition, we discuss whether the shape of the created curve can represent that of a real object; a spiral, for example, is sometimes not appropriate as the border of an object. We present a method that judges the appropriateness of a curve by considering the orientations of the segments.
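
One way to make such a symbolic expression concrete (a hypothetical encoding for illustration; the paper defines its own discretization and formalism):

from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    # Hypothetical 8-way discretization of segment direction.
    N = 0
    NE = 1
    E = 2
    SE = 3
    S = 4
    SW = 5
    W = 6
    NW = 7

class Convexity(Enum):
    CONVEX = 1
    STRAIGHT = 0
    CONCAVE = -1

@dataclass(frozen=True)
class Segment:
    direction: Direction
    convexity: Convexity

# A partially observed curve: two visible segments; reasoning would fill
# the gap between them so that the result is a smooth continuous curve.
visible = [Segment(Direction.E, Convexity.CONVEX),
           Segment(Direction.N, Convexity.CONVEX)]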

Cite as

Kazuko Takahashi. Qualitative Formalization of a Curve on a Two-Dimensional Plane. In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 4:1-4:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{takahashi:LIPIcs.COSIT.2024.4,
  author =	{Takahashi, Kazuko},
  title =	{{Qualitative Formalization of a Curve on a Two-Dimensional Plane}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{4:1--4:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.4},
  URN =		{urn:nbn:de:0030-drops-208193},
  doi =		{10.4230/LIPIcs.COSIT.2024.4},
  annote =	{Keywords: qualitative spatial reasoning, knowledge representation, logical reasoning, shape information}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on 2 datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
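
The retrieval-augmented in-context setup can be pictured as follows (a schematic sketch; the retriever and the prompt format are stand-ins, not the paper's exact pipeline):

def build_prompt(problem_text, example_bank, retrieve, k=3):
    # example_bank: (description, model_code) pairs.
    # retrieve: any similarity-based selector, e.g. embedding nearest
    # neighbours, standing in for the paper's selection strategies.
    shots = retrieve(problem_text, example_bank, k)
    parts = ["Translate each problem description into an executable CP model.\n"]
    for desc, code in shots:
        parts.append(f"### Problem\n{desc}\n### Model\n{code}\n")
    parts.append(f"### Problem\n{problem_text}\n### Model\n")
    return "\n".join(parts)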

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
Short Paper
Minimizing Working-Group Conflicts in Conference Session Scheduling Through Maximum Satisfiability (Short Paper)

Authors: Sami Cherif, Heythem Sattoutah, Chu-Min Li, Corinne Lucet, and Laure Brisoux-Devendeville

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
This paper explores the application of Maximum Satisfiability (Max-SAT) to the complex problem of conference session scheduling, with a particular focus on minimizing working-group conflicts within the context of the ROADEF conference, the largest French-speaking event bringing together researchers from fields such as combinatorial optimization and operational research. A Max-SAT model is introduced, enhanced with new variables, and solved with state-of-the-art solvers. The results of applying our formulation to data from ROADEF demonstrate its ability to effectively compute session schedules while reducing the number of conflicts and the maximum number of parallel sessions compared to the handmade solutions proposed by the organizing committees. These findings underscore the potential of Max-SAT as a valuable tool for optimizing conference scheduling, offering a systematic and efficient solution that ensures a smoother and more productive experience for attendees and organizers alike.
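
The flavor of such an encoding, as a toy sketch (clauses are lists of integer literals in the usual DIMACS convention; the paper's model and its additional variables are richer):

from itertools import combinations

def schedule_wcnf(papers, sessions, conflicts):
    # Variable x(p, s) is true iff paper p is placed in session s.
    # Hard clauses: each paper sits in exactly one session.
    # Soft clauses (weight 1): conflicting papers share no session.
    var = {}
    def x(p, s):
        return var.setdefault((p, s), len(var) + 1)

    hard, soft = [], []
    for p in papers:
        hard.append([x(p, s) for s in sessions])                # at least one
        for s1, s2 in combinations(sessions, 2):                # at most one
            hard.append([-x(p, s1), -x(p, s2)])
    for p, q in conflicts:
        for s in sessions:
            soft.append((1, [-x(p, s), -x(q, s)]))
    return hard, soft

hard, soft = schedule_wcnf(["p1", "p2", "p3"], ["s1", "s2"], [("p1", "p2")])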

Cite as

Sami Cherif, Heythem Sattoutah, Chu-Min Li, Corinne Lucet, and Laure Brisoux-Devendeville. Minimizing Working-Group Conflicts in Conference Session Scheduling Through Maximum Satisfiability (Short Paper). In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 34:1-34:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cherif_et_al:LIPIcs.CP.2024.34,
  author =	{Cherif, Sami and Sattoutah, Heythem and Li, Chu-Min and Lucet, Corinne and Brisoux-Devendeville, Laure},
  title =	{{Minimizing Working-Group Conflicts in Conference Session Scheduling Through Maximum Satisfiability}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{34:1--34:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.34},
  URN =		{urn:nbn:de:0030-drops-207190},
  doi =		{10.4230/LIPIcs.CP.2024.34},
  annote =	{Keywords: Maximum Satisfiability, Scheduling, Modeling}
}
Document
Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests

Authors: Lukas Hübner and Alexandros Stamatakis

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
The field of population genetics attempts to advance our understanding of evolutionary processes. It has applications, for example, in medical research, wildlife conservation, and - in conjunction with recent advances in ancient DNA sequencing technology - studying human migration patterns over the past few thousand years. The basic toolbox of population genetics includes genealogical trees, which describe the shared evolutionary history among individuals of the same species. They are calculated on the basis of genetic variations. However, in recombining organisms, a single tree is insufficient to describe the evolutionary history of the whole genome. Instead, a collection of correlated trees can be used, where each describes the evolutionary history of a consecutive region of the genome. The current state-of-the-art data structure, tree sequences, compresses these genealogical trees via edit operations when moving from one tree to the next along the genome, instead of storing the full, often redundant, description of each tree. We propose a new data structure, genealogical forests, which compresses the set of genealogical trees into a DAG. In this DAG, identical subtrees that are shared across the input trees are encoded only once, thereby allowing for straightforward memoization of intermediate results. Additionally, we provide a C++ implementation of our proposed data structure, called gfkit, which is 2.1 to 11.2 (median 4.0) times faster than the state-of-the-art tool on empirical and simulated datasets at computing important population genetics statistics such as the Allele Frequency Spectrum, Patterson’s f, the Fixation Index, Tajima’s D, pairwise Lowest Common Ancestors, and others. On Lowest Common Ancestor queries with more than two samples as input, gfkit scales asymptotically better than the state of the art, and is thus up to 990 times faster. In conclusion, our proposed data structure compresses genealogical trees by storing shared subtrees only once, thereby enabling straightforward memoization of intermediate results, yielding a substantial runtime reduction and a potentially more intuitive data representation compared to the state of the art. Our improvements will boost the development of novel analyses and models in the field of population genetics and increase scalability to ever-growing genomic datasets.
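
The core idea in miniature (a hash-consing sketch, not gfkit's implementation): store each distinct subtree once, so a per-subtree result is computed once and reused by every tree containing that subtree.

from functools import lru_cache

class Forest:
    # Hash-cons subtrees: structurally identical subtrees share one id.
    def __init__(self):
        self.ids = {}        # structure -> node id
        self.structure = []  # node id -> ('leaf', name) or (left_id, right_id)

    def leaf(self, name):
        return self._intern(('leaf', name))

    def join(self, left, right):
        return self._intern((left, right))

    def _intern(self, key):
        if key not in self.ids:
            self.ids[key] = len(self.structure)
            self.structure.append(key)
        return self.ids[key]

forest = Forest()
shared = forest.join(forest.leaf('a'), forest.leaf('b'))  # in both trees
tree1 = forest.join(shared, forest.leaf('c'))
tree2 = forest.join(shared, forest.leaf('d'))

@lru_cache(maxsize=None)
def num_leaves(node):
    # Memoized once per shared subtree, not once per occurrence.
    key = forest.structure[node]
    if key[0] == 'leaf':
        return 1
    return num_leaves(key[0]) + num_leaves(key[1])

assert num_leaves(tree1) == num_leaves(tree2) == 3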

Cite as

Lukas Hübner and Alexandros Stamatakis. Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hubner_et_al:LIPIcs.WABI.2024.5,
  author =	{H\"{u}bner, Lukas and Stamatakis, Alexandros},
  title =	{{Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.5},
  URN =		{urn:nbn:de:0030-drops-206499},
  doi =		{10.4230/LIPIcs.WABI.2024.5},
  annote =	{Keywords: bioinformatics, population genetics, algorithms}
}
Document
Stochastic Error Cancellation in Analog Quantum Simulation

Authors: Yiyi Cai, Yu Tong, and John Preskill

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
Analog quantum simulation is a promising path towards solving classically intractable problems in many-body physics on near-term quantum devices. However, the presence of noise limits the size of the system and the length of time that can be simulated. In our work, we consider an error model in which the actual Hamiltonian of the simulator differs from the target Hamiltonian we want to simulate by small local perturbations, which are assumed to be random and unbiased. We analyze the error accumulated in observables in this setting and show that, due to stochastic error cancellation, with high probability the error scales as the square root of the number of qubits instead of linearly. We explore the concentration phenomenon of this error as well as its implications for local observables in the thermodynamic limit. Moreover, we show that stochastic error cancellation also manifests in the fidelity between the target state at the end of time-evolution and the actual state we obtain in the presence of noise. This indicates that, to reach a certain fidelity, more noise can be tolerated than implied by the worst-case bound if the noise comes from many statistically independent sources.
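
The cancellation is easy to reproduce for plain sums (a Monte Carlo illustration of the scaling only, not of Hamiltonian dynamics): n independent, unbiased perturbations of size ε add up to about √n·ε in typical magnitude, versus n·ε in the worst case.

import random
import statistics

def typical_error(n_sources, eps=0.01, trials=2000):
    # Standard deviation of a sum of n independent zero-mean perturbations.
    totals = [sum(random.uniform(-eps, eps) for _ in range(n_sources))
              for _ in range(trials)]
    return statistics.pstdev(totals)

for n in (100, 400, 1600):
    print(n, round(typical_error(n), 4))  # roughly doubles as n quadruples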

Cite as

Yiyi Cai, Yu Tong, and John Preskill. Stochastic Error Cancellation in Analog Quantum Simulation. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 2:1-2:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cai_et_al:LIPIcs.TQC.2024.2,
  author =	{Cai, Yiyi and Tong, Yu and Preskill, John},
  title =	{{Stochastic Error Cancellation in Analog Quantum Simulation}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{2:1--2:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.2},
  URN =		{urn:nbn:de:0030-drops-206720},
  doi =		{10.4230/LIPIcs.TQC.2024.2},
  annote =	{Keywords: Analog quantum simulation, error cancellation, concentration of measure}
}
Document
Consistent Ultrafinitist Logic

Authors: Michał J. Gajda

Published in: LIPIcs, Volume 303, 29th International Conference on Types for Proofs and Programs (TYPES 2023)


Abstract
Ultrafinitism postulates that we can only compute on relatively short objects, and that numbers beyond a certain value are not available. This approach would also forbid many forms of infinitary reasoning and allow removing certain paradoxes stemming from enumeration theorems. However, a computational application of ultrafinitist logic requires more than a proof system: it requires a logical framework that expresses proofs, programs, and theorems in a single formalism. We present the inference rules, reduction relation, and self-encoding of our logic, allowing the properties of ultrafinitist logic to be proved directly within itself. We also justify why it can express all bounded Turing programs and can thus serve as a "logic of computability".

Cite as

Michał J. Gajda. Consistent Ultrafinitist Logic. In 29th International Conference on Types for Proofs and Programs (TYPES 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 303, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gajda:LIPIcs.TYPES.2023.5,
  author =	{Gajda, Micha{\l} J.},
  title =	{{Consistent Ultrafinitist Logic}},
  booktitle =	{29th International Conference on Types for Proofs and Programs (TYPES 2023)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-332-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{303},
  editor =	{Kesner, Delia and Reyes, Eduardo Hermo and van den Berg, Benno},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TYPES.2023.5},
  URN =		{urn:nbn:de:0030-drops-204833},
  doi =		{10.4230/LIPIcs.TYPES.2023.5},
  annote =	{Keywords: ultrafinitism, bounded Turing completeness, logic of computability, decidable logic, explicit complexity, strict finitism}
}
Document
Clausal Congruence Closure

Authors: Armin Biere, Katalin Fazekas, Mathias Fleury, and Nils Froleyks

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Many practical applications of satisfiability solving employ multiple steps to encode an original problem formulation into conjunctive normal form. Often circuits are used as an intermediate representation before encoding those circuits into clausal form. These circuits, however, might contain redundant isomorphic sub-circuits. If they are blindly translated into clausal form, this redundancy is retained and increases solving time unless specific preprocessing algorithms are used. Furthermore, such redundant sub-formula structure might only emerge during solving and then needs to be addressed by inprocessing. This paper presents a new approach which extracts gate information from the formula and applies congruence closure to match and eliminate redundant gates. Besides new algorithms for gate extraction, we also describe previously unpublished attempts to tackle this problem. Experiments focus on the important problem of combinational equivalence checking for hardware designs and show that our new approach yields a substantial gain in CNF solver performance.
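
A miniature of the gate-matching idea on a two-input AND netlist (one ingredient only; the paper starts from CNF and first extracts the gates):

def congruence_close(gates):
    # gates: gate_id -> ('and', in1, in2). Merge gates applying the same
    # operation to congruent inputs; iterate until a fixed point.
    rep = {}

    def find(x):
        while x in rep:
            x = rep[x]
        return x

    changed = True
    while changed:
        changed = False
        seen = {}
        for g, (op, a, b) in gates.items():
            key = (op,) + tuple(sorted((find(a), find(b))))
            c = find(g)
            if key not in seen:
                seen[key] = c
            elif find(seen[key]) != c:
                rep[c] = find(seen[key])   # same op on congruent inputs
                changed = True
    return {g: find(g) for g in gates}

# g1/g2 are structurally equal, which then makes g3/g4 congruent.
gates = {'g1': ('and', 'x', 'y'), 'g2': ('and', 'y', 'x'),
         'g3': ('and', 'g1', 'z'), 'g4': ('and', 'g2', 'z')}
print(congruence_close(gates))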

Cite as

Armin Biere, Katalin Fazekas, Mathias Fleury, and Nils Froleyks. Clausal Congruence Closure. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 6:1-6:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{biere_et_al:LIPIcs.SAT.2024.6,
  author =	{Biere, Armin and Fazekas, Katalin and Fleury, Mathias and Froleyks, Nils},
  title =	{{Clausal Congruence Closure}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{6:1--6:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.6},
  URN =		{urn:nbn:de:0030-drops-205287},
  doi =		{10.4230/LIPIcs.SAT.2024.6},
  annote =	{Keywords: Satisfiability Solving, Congruence Closure, Structural Hashing, SAT Sweeping, Conjunctive Normal Form, Combinational Equivalence Checking, Hardware Equivalence Checking}
}
Document
Cooking String-Integer Conversions with Noodles

Authors: Vojtěch Havlena, Lukáš Holík, Ondřej Lengál, and Juraj Síč

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
We propose a method for efficiently handling string constraints with string-integer conversions. It extends the recently introduced stabilization-based procedure for solving string (dis)equations with regular and length constraints. Our approach translates the conversions into a linear integer arithmetic formula, together with regular constraints and word equations. We have integrated it into the string solver Z3-Noodler, and our experiments show that it is competitive and, on some established benchmarks, even several orders of magnitude faster than the state of the art.
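
The translation idea in miniature (a toy rendering over a fixed-length digit string; the actual procedure in Z3-Noodler handles arbitrary lengths and non-numeric cases): str.to_int becomes a linear term over per-position digit variables, with the digit ranges guaranteed by a regular constraint on the string.

def to_int_as_lia(digit_vars):
    # digit_vars: integer variables d_0 .. d_{n-1}, most significant first,
    # each tied to one string position and bounded to [0, 9].
    n = len(digit_vars)
    term = " + ".join(f"{10 ** (n - 1 - i)}*{d}"
                      for i, d in enumerate(digit_vars))
    bounds = [f"0 <= {d} <= 9" for d in digit_vars]
    return term, bounds

term, bounds = to_int_as_lia(["d0", "d1", "d2"])
print(term)    # 100*d0 + 10*d1 + 1*d2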

Cite as

Vojtěch Havlena, Lukáš Holík, Ondřej Lengál, and Juraj Síč. Cooking String-Integer Conversions with Noodles. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{havlena_et_al:LIPIcs.SAT.2024.14,
  author =	{Havlena, Vojt\v{e}ch and Hol{\'\i}k, Luk\'{a}\v{s} and Leng\'{a}l, Ond\v{r}ej and S{\'\i}\v{c}, Juraj},
  title =	{{Cooking String-Integer Conversions with Noodles}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.14},
  URN =		{urn:nbn:de:0030-drops-205365},
  doi =		{10.4230/LIPIcs.SAT.2024.14},
  annote =	{Keywords: string solving, string conversions, SMT solving}
}
Document
Quantum Circuit Mapping Based on Incremental and Parallel SAT Solving

Authors: Jiong Yang, Yaroslav A. Kharkov, Yunong Shi, Marijn J. H. Heule, and Bruno Dutertre

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Quantum Computing (QC) is a new computational paradigm that promises significant speedup over classical computing in various domains. However, near-term QC faces numerous challenges, including limited qubit connectivity and noisy quantum operations. To address the qubit connectivity constraint, circuit mapping is required for executing quantum circuits on quantum computers. This process involves performing initial qubit placement and using quantum SWAP operations to relocate non-adjacent qubits for nearest-neighbor interaction. Reducing the SWAP count in circuit mapping is essential for improving the success rate of quantum circuit execution, as SWAPs are costly and error-prone. In this work, we introduce a novel circuit mapping method that combines incremental and parallel solving for Boolean Satisfiability (SAT). We present an innovative SAT encoding for circuit mapping problems, which significantly improves solver-based mapping methods and provides a smooth trade-off between compilation quality and compilation time. Through comprehensive benchmarking of 78 instances covering 3 quantum algorithms on 2 distinct quantum computer topologies, we demonstrate that our method is 26× faster than state-of-the-art solver-based methods, reducing the compilation time from hours to minutes for important quantum applications. Our method also surpasses the existing heuristic algorithm by 26% in SWAP count.
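
At a high level, solver-based mapping searches for the smallest SWAP budget that still admits a feasible mapping; a schematic of that loop follows (encode and solve are stand-ins for the paper's SAT encoding and solver; the incremental approach reuses solver state across budgets rather than restarting):

def min_swap_mapping(encode, solve, max_budget):
    # Try budgets t = 0, 1, ...; the first satisfiable instance gives a
    # mapping with the minimum SWAP count within the budget range.
    for t in range(max_budget + 1):
        model = solve(encode(swap_budget=t))
        if model is not None:
            return t, model
    return None  # no mapping using at most max_budget SWAPs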

Cite as

Jiong Yang, Yaroslav A. Kharkov, Yunong Shi, Marijn J. H. Heule, and Bruno Dutertre. Quantum Circuit Mapping Based on Incremental and Parallel SAT Solving. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{yang_et_al:LIPIcs.SAT.2024.29,
  author =	{Yang, Jiong and Kharkov, Yaroslav A. and Shi, Yunong and Heule, Marijn J. H. and Dutertre, Bruno},
  title =	{{Quantum Circuit Mapping Based on Incremental and Parallel SAT Solving}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.29},
  URN =		{urn:nbn:de:0030-drops-205517},
  doi =		{10.4230/LIPIcs.SAT.2024.29},
  annote =	{Keywords: Quantum computing, Quantum compilation, SAT solving, Incremental solving, Parallel solving}
}
Document
Public-Key Pseudoentanglement and the Hardness of Learning Ground State Entanglement Structure

Authors: Adam Bouland, Bill Fefferman, Soumik Ghosh, Tony Metger, Umesh Vazirani, Chenyi Zhang, and Zixin Zhou

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Given a local Hamiltonian, how difficult is it to determine the entanglement structure of its ground state? We show that this problem is computationally intractable even if one is only trying to decide if the ground state is volume-law vs near area-law entangled. We prove this by constructing strong forms of pseudoentanglement in a public-key setting, where the circuits used to prepare the states are public knowledge. In particular, we construct two families of quantum circuits which produce volume-law vs near area-law entangled states, but nonetheless the classical descriptions of the circuits are indistinguishable under the Learning with Errors (LWE) assumption. Indistinguishability of the circuits then allows us to translate our construction to Hamiltonians. Our work opens new directions in Hamiltonian complexity, for example whether it is difficult to learn certain phases of matter.

Cite as

Adam Bouland, Bill Fefferman, Soumik Ghosh, Tony Metger, Umesh Vazirani, Chenyi Zhang, and Zixin Zhou. Public-Key Pseudoentanglement and the Hardness of Learning Ground State Entanglement Structure. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 21:1-21:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bouland_et_al:LIPIcs.CCC.2024.21,
  author =	{Bouland, Adam and Fefferman, Bill and Ghosh, Soumik and Metger, Tony and Vazirani, Umesh and Zhang, Chenyi and Zhou, Zixin},
  title =	{{Public-Key Pseudoentanglement and the Hardness of Learning Ground State Entanglement Structure}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{21:1--21:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.21},
  URN =		{urn:nbn:de:0030-drops-204175},
  doi =		{10.4230/LIPIcs.CCC.2024.21},
  annote =	{Keywords: Quantum computing, Quantum complexity theory, entanglement}
}
Document
Track A: Algorithms, Complexity and Games
Non-Linear Paging

Authors: Ilan Doron-Arad and Joseph (Seffi) Naor

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We formulate and study non-linear paging - a broad model of online paging where the size of subsets of pages is determined by a monotone non-linear set function of the pages. This model captures the well-studied classic weighted paging and generalized paging problems, as well as submodular and supermodular paging, studied here for the first time, which have a range of applications from virtual memory to machine learning. Unlike classic paging, the cache threshold parameter k does not yield good competitive ratios for non-linear paging. Instead, we introduce a novel parameter 𝓁 that generalizes the notion of cache size to the non-linear setting. We obtain a tight deterministic 𝓁-competitive algorithm for general non-linear paging and an o(log²𝓁) lower bound on the competitive ratio of randomized algorithms. Our algorithm is based on a new generic LP for the problem that captures both submodular and supermodular paging, in contrast to LPs used for submodular cover settings. We finally focus on the supermodular paging problem, a variant of online set cover and online submodular cover in which sets are repeatedly requested to be removed from the cover. We obtain polylogarithmic lower and upper bounds and an offline approximation algorithm.
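
To fix the model (a generic simulation for intuition, not the paper's 𝓁-competitive algorithm): classic paging is the special case where the set function is cardinality with threshold k, while non-linear paging lets an arbitrary monotone set function decide when the cache must shrink.

def run_paging(requests, fits, choose_victim):
    # fits(cache): does the cache respect the (possibly non-linear) bound?
    # choose_victim(candidates): eviction rule, e.g. least-recently-used.
    cache, faults = set(), 0
    for page in requests:
        if page not in cache:
            faults += 1
            cache.add(page)
            while not fits(cache):
                cache.discard(choose_victim(cache - {page}))
    return faults

# Classic paging as the linear special case f(S) = |S| <= k:
print(run_paging("abcab", fits=lambda s: len(s) <= 2,
                 choose_victim=lambda s: sorted(s)[0]))  # 5 faults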

Cite as

Ilan Doron-Arad and Joseph (Seffi) Naor. Non-Linear Paging. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 57:1-57:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{doronarad_et_al:LIPIcs.ICALP.2024.57,
  author =	{Doron-Arad, Ilan and Naor, Joseph (Seffi)},
  title =	{{Non-Linear Paging}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{57:1--57:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.57},
  URN =		{urn:nbn:de:0030-drops-202000},
  doi =		{10.4230/LIPIcs.ICALP.2024.57},
  annote =	{Keywords: paging, competitive analysis, non-linear paging, submodular and supermodular functions}
}