16 Search Results for "Lin, Yu"


Document
Tight Bounds for Quantum Phase Estimation and Related Problems

Authors: Nikhil S. Mande and Ronald de Wolf

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Phase estimation, due to Kitaev [arXiv'95], is one of the most fundamental subroutines in quantum computing, used in Shor’s factoring algorithm, optimization algorithms, quantum chemistry algorithms, and many others. In the basic scenario, one is given black-box access to a unitary U, and an eigenstate |ψ⟩ of U with unknown eigenvalue e^{iθ}, and the task is to estimate the eigenphase θ within ±δ, with high probability. The repeated application of U and U^{-1} is typically the most expensive part of phase estimation, so for us the cost of an algorithm will be that number of applications. Motivated by the "guided Hamiltonian problem" in quantum chemistry, we tightly characterize the cost of several variants of phase estimation where we are no longer given an arbitrary eigenstate, but are required to estimate the maximum eigenphase of U, aided by advice in the form of states (or a unitary preparing those states) which are promised to have at least a certain overlap γ with the top eigenspace. We give algorithms and matching lower bounds (up to logarithmic factors) for all ranges of parameters. We show a crossover point below which advice is not helpful: o(1/γ²) copies of the advice state (or o(1/γ) applications of an advice-preparing unitary) are not significantly better than having no advice at all. We also show that having knowledge of the eigenbasis of U does not significantly reduce cost. Our upper bounds use the subroutine of generalized maximum-finding of van Apeldoorn, Gilyén, Gribling, and de Wolf [Quantum'20], the state-based Hamiltonian simulation of Lloyd, Mohseni, and Rebentrost [Nature Physics'13], and several other techniques. Our lower bounds follow by reductions from a fractional version of the Boolean OR function with advice, which we lower bound by a simple modification of the adversary method of Ambainis [JCSS'02]. As an immediate consequence we also obtain a lower bound on the complexity of the Unitary recurrence time problem, matching an upper bound of She and Yuen [ITCS'23] and resolving an open question posed by them. Lastly, we study how efficiently one can reduce the error probability in the basic phase-estimation scenario. We show that an algorithm solving phase estimation to precision δ with error probability at most ε must have cost Ω(1/δ log(1/ε)), matching the obvious way to error-reduce the basic constant-error-probability phase estimation algorithm. This contrasts with some other scenarios in quantum computing (e.g. search) where error-reduction costs only a factor O(√{log(1/ε)}). Our lower bound technique uses a variant of the polynomial method with trigonometric polynomials.
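
For readers less familiar with the basic scenario, the NumPy sketch below simulates textbook quantum phase estimation (an n-qubit phase register followed by an inverse QFT), not the algorithms of this paper: outcome y is measured with probability |2^{-n} Σ_x e^{ix(θ - 2πy/2^n)}|², so the estimate 2πy/2^n is accurate to within δ ≈ 2π/2^n at a cost of 2^n - 1 ≈ 2π/δ controlled applications of U. All names and parameters here are illustrative.

import numpy as np

def qpe_distribution(theta, n_bits):
    """Probability of each n-bit outcome y in textbook phase estimation:
    P(y) = |2^{-n} * sum_x exp(i*x*(theta - 2*pi*y/2^n))|^2."""
    N = 2 ** n_bits
    x, y = np.arange(N), np.arange(N)
    # amplitudes on the phase register after the inverse QFT
    amps = np.exp(1j * np.outer(x, theta - 2 * np.pi * y / N)).mean(axis=0)
    return np.abs(amps) ** 2

theta = 2 * np.pi * 0.3141      # unknown eigenphase (ground truth for the demo)
n_bits = 8                      # precision ~ 2*pi/2^n, cost ~ 2^n - 1 applications of U
probs = qpe_distribution(theta, n_bits)
y_hat = int(np.argmax(probs))   # in the real algorithm y is sampled, not argmax'ed
print("estimate:", 2 * np.pi * y_hat / 2 ** n_bits, "true:", theta)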

Cite as

Nikhil S. Mande and Ronald de Wolf. Tight Bounds for Quantum Phase Estimation and Related Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 81:1-81:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{mande_et_al:LIPIcs.ESA.2023.81,
  author =	{Mande, Nikhil S. and de Wolf, Ronald},
  title =	{{Tight Bounds for Quantum Phase Estimation and Related Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{81:1--81:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.81},
  URN =		{urn:nbn:de:0030-drops-187346},
  doi =		{10.4230/LIPIcs.ESA.2023.81},
  annote =	{Keywords: Phase estimation, quantum computing}
}
Document
On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation

Authors: Nai-Hui Chia, Kai-Min Chung, Yao-Ching Hsieh, Han-Hsuan Lin, Yao-Ting Lin, and Yu-Ching Shen

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Hamiltonian simulation is one of the most important problems in the field of quantum computing. There have been extensive efforts to design faster simulation algorithms, and as expected the evolution time T greatly affects the runtime. While some specific types of Hamiltonians can be fast-forwarded, i.e., simulated within time o(T), for some large classes of Hamiltonians (e.g., all local/sparse Hamiltonians), existing simulation algorithms require running time at least linear in the evolution time T. On the other hand, while there exist Ω(T) circuit-size lower bounds for some large classes of Hamiltonians, these lower bounds do not rule out the possibility of simulating Hamiltonians with large but "low-depth" circuits by running things in parallel. As a result, physical systems whose size scales with T could potentially fast-forward the simulation. It is therefore intriguing to ask whether we can achieve fast Hamiltonian simulation with the power of parallelism. In this work, we give a negative answer to this open problem in various settings. In the oracle model, we prove that there are time-independent sparse Hamiltonians that cannot be simulated via an oracle circuit of depth o(T). In the plain model, relying on the random oracle heuristic, we show that there exist time-independent local Hamiltonians and time-dependent geometrically local Hamiltonians on n qubits that cannot be simulated by a circuit of depth o(T/n^c), where c is a constant. Lastly, we generalize the above results and show that any simulator that is itself a geometrically local Hamiltonian cannot perform the simulation much faster than parallel quantum algorithms.
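
As a generic back-of-the-envelope illustration (not taken from the paper) of why standard simulation methods have sequential depth growing with T, the NumPy/SciPy sketch below applies first-order Trotterization to H = A + B: each sequential step applies exp(-iAΔt)exp(-iBΔt), and for a fixed target error the number of steps grows with T (quadratically for this first-order formula; more advanced methods get close to linear, and the paper shows depth o(T) is impossible in general). The matrix size and random Hamiltonians are placeholders.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

d, T = 8, 4.0
A, B = random_hermitian(d), random_hermitian(d)
exact = expm(-1j * (A + B) * T)

for steps in (10, 100, 1000):                 # number of sequential Trotter steps (~ depth)
    dt = T / steps
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)
    approx = np.linalg.matrix_power(step, steps)
    print(steps, np.linalg.norm(approx - exact, 2))   # error shrinks roughly like T^2/steps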

Cite as

Nai-Hui Chia, Kai-Min Chung, Yao-Ching Hsieh, Han-Hsuan Lin, Yao-Ting Lin, and Yu-Ching Shen. On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 33:1-33:45, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chia_et_al:LIPIcs.CCC.2023.33,
  author =	{Chia, Nai-Hui and Chung, Kai-Min and Hsieh, Yao-Ching and Lin, Han-Hsuan and Lin, Yao-Ting and Shen, Yu-Ching},
  title =	{{On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{33:1--33:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.33},
  URN =		{urn:nbn:de:0030-drops-183038},
  doi =		{10.4230/LIPIcs.CCC.2023.33},
  annote =	{Keywords: Hamiltonian simulation, Depth lower bound, Parallel query lower bound}
}
Document
A Hybrid Programming Language for Formal Modeling and Verification of Hybrid Systems

Authors: Eduard Kamburjan, Stefan Mitsch, and Reiner Hähnle

Published in: LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems. Leibniz Transactions on Embedded Systems, Volume 8, Issue 2


Abstract
Designing and modeling complex cyber-physical systems (CPS) faces the double challenge of combined discrete-continuous dynamics and concurrent behavior. Existing formal modeling and verification languages for CPS expose the underlying proof search technology. They lack high-level structuring elements and are not efficiently executable. The ensuing modeling gap renders formal CPS models hard to understand and to validate. We propose a high-level programming-based approach to formal modeling and verification of hybrid systems as a hybrid extension of an Active Objects language. Well-structured hybrid active programs and requirements allow automatic, reachability-preserving translation into differential dynamic logic, a logic for hybrid (discrete-continuous) programs. Verification is achieved by discharging the resulting formulas with the theorem prover KeYmaera X. We demonstrate the usability of our approach with case studies.

Cite as

Eduard Kamburjan, Stefan Mitsch, and Reiner Hähnle. A Hybrid Programming Language for Formal Modeling and Verification of Hybrid Systems. In LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems. Leibniz Transactions on Embedded Systems, Volume 8, Issue 2, pp. 04:1-04:34, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{kamburjan_et_al:LITES.8.2.4,
  author =	{Kamburjan, Eduard and Mitsch, Stefan and H\"{a}hnle, Reiner},
  title =	{{A Hybrid Programming Language for Formal Modeling and Verification of Hybrid Systems}},
  booktitle =	{LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems},
  pages =	{04:1--04:34},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{2},
  editor =	{Kamburjan, Eduard and Mitsch, Stefan and H\"{a}hnle, Reiner},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.2.4},
  doi =		{10.4230/LITES.8.2.4},
  annote =	{Keywords: Active Objects, Differential Dynamic Logic, Hybrid Systems}
}
Document
Susceptibility to Image Resolution in Face Recognition and Training Strategies to Enhance Robustness

Authors: Martin Knoche, Stefan Hörmann, and Gerhard Rigoll

Published in: LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1


Abstract
Many face recognition approaches expect the input images to have similar image resolution. However, in real-world applications, the image resolution varies due to different image capture mechanisms or sources, affecting the performance of face recognition systems. This work first analyzes the image resolution susceptibility of modern face recognition. Face verification accuracy on the very popular LFW dataset drops from 99.23% to almost 55% when the image dimensions of both images are reduced to an arguably very poor resolution. With cross-resolution image pairs (one high-resolution and one low-resolution image), face verification accuracy is even worse. This characteristic is investigated in more depth by analyzing the feature distances used for face verification. To increase robustness, we propose two training strategies applied to a state-of-the-art face recognition model: 1) training with 50% low-resolution images within each batch and 2) using a cosine distance loss between high- and low-resolution features in a siamese network structure. Both methods significantly boost face verification accuracy when training and testing image resolutions match. Training a network with several resolutions simultaneously, instead of adding only one specific low resolution, showed improvements across all resolutions and made a single model applicable to unknown resolutions. However, models trained for one particular low resolution perform better when testing at exactly that resolution. We improve the face verification accuracy from 96.86% to 97.72% on the popular LFW database with image dimensions uniformly distributed between 112 × 112 px and 5 × 5 px. Our approaches improve face verification accuracy even more, from 77.56% to 87.17%, for distributions focusing on lower image resolutions. Lastly, we propose specific image dimension sets focusing on high, mid, and low resolution for five well-known datasets to benchmark face verification accuracy in cross-resolution scenarios.
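
A minimal PyTorch-style sketch of the two training strategies described above, assuming a hypothetical shared-weight backbone embed() that maps a batch of square face crops to feature vectors; the resolutions, interpolation choices, and names are illustrative, not the authors' implementation.

import torch
import torch.nn.functional as F

def mixed_resolution_batch(images, low_size=28):
    """Strategy 1: replace a random 50% of the batch with low-resolution copies
    (downscaled, then upscaled back so the input shape stays fixed)."""
    b = images.size(0)
    idx = torch.randperm(b)[: b // 2]
    low = F.interpolate(images[idx], size=low_size, mode="bilinear", align_corners=False)
    images = images.clone()
    images[idx] = F.interpolate(low, size=images.shape[-1], mode="bilinear", align_corners=False)
    return images

def siamese_cosine_loss(embed, hr_images, low_size=28):
    """Strategy 2: cosine distance between features of a high-resolution image
    and its low-resolution counterpart, computed by the same backbone."""
    lr = F.interpolate(hr_images, size=low_size, mode="bilinear", align_corners=False)
    lr = F.interpolate(lr, size=hr_images.shape[-1], mode="bilinear", align_corners=False)
    f_hr, f_lr = embed(hr_images), embed(lr)
    return (1.0 - F.cosine_similarity(f_hr, f_lr, dim=1)).mean()

In training, this auxiliary loss would be added (with some weight) to the usual identification loss.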

Cite as

Martin Knoche, Stefan Hörmann, and Gerhard Rigoll. Susceptibility to Image Resolution in Face Recognition and Training Strategies to Enhance Robustness. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 01:1-01:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{knoche_et_al:LITES.8.1.1,
  author =	{Knoche, Martin and H\"{o}rmann, Stefan and Rigoll, Gerhard},
  title =	{{Susceptibility to Image Resolution in Face Recognition and Training Strategies to Enhance Robustness}},
  booktitle =	{LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision},
  pages =	{01:1--01:20},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  editor =	{Knoche, Martin and H\"{o}rmann, Stefan and Rigoll, Gerhard},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.1},
  doi =		{10.4230/LITES.8.1.1},
  annote =	{Keywords: recognition, resolution, cross, face, identification}
}
Document
HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology

Authors: Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele

Published in: LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1


Abstract
Convolutional neural networks (CNNs) have produced unprecedented accuracy for many computer vision problems in the recent past. In power and compute-constrained embedded platforms, deploying modern CNNs can present many challenges. Most CNN architectures do not run in real-time due to the high number of computational operations involved during the inference phase. This emphasizes the role of CNN optimization techniques in early design space exploration. To estimate their efficacy in satisfying the target constraints, existing techniques are either hardware (HW) agnostic, pseudo-HW-aware by considering parameter and operation counts, or HW-aware through inflexible hardware-in-the-loop (HIL) setups. In this work, we introduce HW-Flow, a framework for optimizing and exploring CNN models based on three levels of hardware abstraction: Coarse, Mid and Fine. Through these levels, CNN design and optimization can be iteratively refined towards efficient execution on the target hardware platform. We present HW-Flow in the context of CNN pruning by augmenting a reinforcement learning agent with key metrics to understand the influence of its pruning actions on the inference hardware. With 2× reduction in energy and latency, we prune ResNet56, ResNet50, and DeepLabv3 with minimal accuracy degradation on the CIFAR-10, ImageNet, and CityScapes datasets, respectively.
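
The reward sketch below only illustrates the general idea of letting an RL pruning agent trade accuracy against estimates from an abstract hardware model; the constants, weights, and the linear "coarse" model are placeholders, not HW-Flow's actual metrics or abstraction levels.

def coarse_hw_estimate(macs, params, energy_per_mac=4.6e-12, bytes_per_param=4):
    """A deliberately crude 'coarse-level' hardware model: energy proportional
    to MAC count plus a simple memory footprint term (placeholder constants)."""
    return macs * energy_per_mac, params * bytes_per_param

def pruning_reward(accuracy, macs, params, baseline_energy, alpha=1.0, beta=0.5):
    """Reward for a pruning action: keep accuracy high while pushing the modeled
    energy below the unpruned baseline."""
    energy, _ = coarse_hw_estimate(macs, params)
    return alpha * accuracy - beta * (energy / baseline_energy)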

Cite as

Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele. HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 03:1-03:30, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{vemparala_et_al:LITES.8.1.3,
  author =	{Vemparala, Manoj-Rohit and Fasfous, Nael and Frickenstein, Alexander and Valpreda, Emanuele and Camalleri, Manfredi and Zhao, Qi and Unger, Christian and Nagaraja, Naveen-Shankar and Martina, Maurizio and Stechele, Walter},
  title =	{{HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology}},
  booktitle =	{LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision},
  pages =	{03:1--03:30},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  editor =	{Vemparala, Manoj-Rohit and Fasfous, Nael and Frickenstein, Alexander and Valpreda, Emanuele and Camalleri, Manfredi and Zhao, Qi and Unger, Christian and Nagaraja, Naveen-Shankar and Martina, Maurizio and Stechele, Walter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.3},
  doi =		{10.4230/LITES.8.1.3},
  annote =	{Keywords: Convolutional Neural Networks, Optimization, Hardware Modeling, Pruning}
}
Document
Feature Cross Search via Submodular Optimization

Authors: Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, and Qian Yu

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In this paper, we study feature cross search as a fundamental primitive in feature engineering. The importance of feature cross search especially for the linear model has been known for a while, with well-known textbook examples. In this problem, the goal is to select a small subset of features, combine them to form a new feature (called the crossed feature) by considering their Cartesian product, and find feature crosses to learn an accurate model. In particular, we study the problem of maximizing a normalized Area Under the Curve (AUC) of the linear model trained on the crossed feature column. First, we show that it is not possible to provide an n^{1/log log n}-approximation algorithm for this problem unless the exponential time hypothesis fails. This result also rules out the possibility of solving this problem in polynomial time unless 𝖯 = NP. On the positive side, by assuming the naïve Bayes assumption, we show that there exists a simple greedy (1-1/e)-approximation algorithm for this problem. This result is established by relating the AUC to the total variation of the commutator of two probability measures and showing that the total variation of the commutator is monotone and submodular. To show this, we relate the submodularity of this function to the positive semi-definiteness of a corresponding kernel matrix. Then, we use Bochner’s theorem to prove the positive semi-definiteness by showing that its inverse Fourier transform is non-negative everywhere. Our techniques and structural results might be of independent interest.
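
For intuition on the positive result, here is the standard greedy (1-1/e)-approximation for maximizing a monotone submodular set function under a cardinality constraint, run on a toy coverage function; the paper's contribution is showing that, under the naïve Bayes assumption, the normalized AUC objective for feature crosses fits this framework, not this generic routine.

def greedy_submodular(ground_set, f, k):
    """Greedy: repeatedly add the element with the largest marginal gain.
    For monotone submodular f this achieves a (1 - 1/e)-approximation."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# Toy example: set coverage, a classic monotone submodular function.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
print(greedy_submodular(set(coverage), f, k=2))   # e.g. {'a', 'c'}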

Cite as

Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, and Qian Yu. Feature Cross Search via Submodular Optimization. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 31:1-31:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ESA.2021.31,
  author =	{Chen, Lin and Esfandiari, Hossein and Fu, Gang and Mirrokni, Vahab S. and Yu, Qian},
  title =	{{Feature Cross Search via Submodular Optimization}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{31:1--31:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.31},
  URN =		{urn:nbn:de:0030-drops-146124},
  doi =		{10.4230/LIPIcs.ESA.2021.31},
  annote =	{Keywords: Feature engineering, feature cross, submodularity}
}
Document
LRBinner: Binning Long Reads in Metagenomics Datasets

Authors: Anuradha Wickramarachchi and Yu Lin

Published in: LIPIcs, Volume 201, 21st International Workshop on Algorithms in Bioinformatics (WABI 2021)


Abstract
Advancements in metagenomics sequencing allow the study of microbial communities directly from their environments. Metagenomics binning is a key step in the species characterisation of microbial communities. Next-generation sequencing reads are usually assembled into contigs for metagenomics binning, mainly due to the limited information within short reads. Third-generation sequencing provides much longer reads that have lengths similar to the contigs assembled from short reads. However, existing contig-binning tools cannot be directly applied to long reads due to the absence of coverage information and the presence of high error rates. The few existing long-read binning tools either use only composition or use composition and coverage information separately. This may ignore bins that correspond to low-abundance species or erroneously split bins that correspond to species with non-uniform coverage. Here we present a reference-free binning approach, LRBinner, that combines composition and coverage information of complete long-read datasets. LRBinner also uses a distance-histogram-based clustering algorithm to extract clusters with varying sizes. Experimental results on both simulated and real datasets show that LRBinner achieves the best binning accuracy compared to the baselines. Moreover, we show that binning reads with LRBinner prior to assembly reduces the computational resources needed for assembly while attaining satisfactory assembly quality.
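
To make "composition plus coverage" concrete, the sketch below computes the two standard per-read feature sources: a tetranucleotide-frequency (composition) vector and a coverage proxy derived from a dataset-wide k-mer count table. This is a generic illustration only; LRBinner's actual feature construction and its distance-histogram-based clustering are described in the paper, and kmer_index here is a hypothetical precomputed dictionary.

from collections import Counter
from itertools import product

BASES = "ACGT"
TETRAMERS = ["".join(p) for p in product(BASES, repeat=4)]   # 256 composition features

def composition_vector(read):
    """Normalized tetranucleotide frequencies of one long read."""
    counts = Counter(read[i:i + 4] for i in range(len(read) - 3))
    total = sum(counts[t] for t in TETRAMERS) or 1
    return [counts[t] / total for t in TETRAMERS]

def coverage_proxy(read, kmer_index, k=15):
    """Average multiplicity of the read's k-mers in a dataset-wide count table,
    a simple stand-in for per-read coverage."""
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    return sum(kmer_index.get(km, 0) for km in kmers) / max(len(kmers), 1)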

Cite as

Anuradha Wickramarachchi and Yu Lin. LRBinner: Binning Long Reads in Metagenomics Datasets. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 201, pp. 11:1-11:18, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{wickramarachchi_et_al:LIPIcs.WABI.2021.11,
  author =	{Wickramarachchi, Anuradha and Lin, Yu},
  title =	{{LRBinner: Binning Long Reads in Metagenomics Datasets}},
  booktitle =	{21st International Workshop on Algorithms in Bioinformatics (WABI 2021)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-200-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{201},
  editor =	{Carbone, Alessandra and El-Kebir, Mohammed},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2021.11},
  URN =		{urn:nbn:de:0030-drops-143644},
  doi =		{10.4230/LIPIcs.WABI.2021.11},
  annote =	{Keywords: Metagenomics binning, long reads, machine learning, clustering}
}
Document
GraphBin2: Refined and Overlapped Binning of Metagenomic Contigs Using Assembly Graphs

Authors: Vijini G. Mallawaarachchi, Anuradha S. Wickramarachchi, and Yu Lin

Published in: LIPIcs, Volume 172, 20th International Workshop on Algorithms in Bioinformatics (WABI 2020)


Abstract
Metagenomic sequencing allows us to study the structure, diversity and ecology of microbial communities without the necessity of obtaining pure cultures. In many metagenomics studies, the reads obtained from metagenomics sequencing are first assembled into longer contigs, and these contigs are then binned into clusters such that contigs in a cluster are expected to come from the same species. As different species may share common sequences in their genomes, one assembled contig may belong to multiple species. However, existing tools for contig binning only support non-overlapped binning, i.e., each contig is assigned to at most one bin (species). In this paper, we introduce GraphBin2, which refines the binning results obtained from existing tools and, more importantly, is able to assign contigs to multiple bins. GraphBin2 uses the connectivity and coverage information from assembly graphs to adjust existing binning results on contigs and to infer contigs shared by multiple species. Experimental results on both simulated and real datasets demonstrate that GraphBin2 not only improves the binning results of existing tools but also supports assigning contigs to multiple bins.
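
A minimal sketch of the kind of assembly-graph refinement described above: unbinned contigs adopt the majority bin label among graph neighbours with similar coverage. The single-label propagation rule and the coverage threshold are illustrative simplifications; GraphBin2's actual procedure, including assigning one contig to multiple bins, is more involved.

from collections import Counter

def refine_bins(adjacency, bins, coverage, rounds=5, cov_tol=0.5):
    """adjacency: contig -> set of neighbouring contigs in the assembly graph
    bins:      contig -> bin id, or None for unbinned contigs
    coverage:  contig -> average read coverage"""
    bins = dict(bins)
    for _ in range(rounds):
        for contig, label in list(bins.items()):
            if label is not None:
                continue
            votes = Counter(
                bins[nb]
                for nb in adjacency.get(contig, ())
                if bins.get(nb) is not None
                and abs(coverage[nb] - coverage[contig]) <= cov_tol * coverage[contig]
            )
            if votes:
                bins[contig] = votes.most_common(1)[0][0]
    return bins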

Cite as

Vijini G. Mallawaarachchi, Anuradha S. Wickramarachchi, and Yu Lin. GraphBin2: Refined and Overlapped Binning of Metagenomic Contigs Using Assembly Graphs. In 20th International Workshop on Algorithms in Bioinformatics (WABI 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 172, pp. 8:1-8:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{mallawaarachchi_et_al:LIPIcs.WABI.2020.8,
  author =	{Mallawaarachchi, Vijini G. and Wickramarachchi, Anuradha S. and Lin, Yu},
  title =	{{GraphBin2: Refined and Overlapped Binning of Metagenomic Contigs Using Assembly Graphs}},
  booktitle =	{20th International Workshop on Algorithms in Bioinformatics (WABI 2020)},
  pages =	{8:1--8:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-161-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{172},
  editor =	{Kingsford, Carl and Pisanti, Nadia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2020.8},
  URN =		{urn:nbn:de:0030-drops-127974},
  doi =		{10.4230/LIPIcs.WABI.2020.8},
  annote =	{Keywords: Metagenomics binning, contigs, assembly graphs, overlapped binning}
}
Document
Symbolic Execution Game Semantics

Authors: Yu-Yang Lin and Nikos Tzevelekos

Published in: LIPIcs, Volume 167, 5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)


Abstract
We present a framework for symbolically executing and model checking higher-order programs with external (open) methods. We focus on the client-library paradigm and in particular we aim to check libraries with respect to any definable client. We combine traditional symbolic execution techniques with operational game semantics to build a symbolic execution semantics that captures arbitrary external behaviour. We prove the symbolic semantics to be sound and complete. This yields a bounded technique by imposing bounds on the depth of recursion and callbacks. We provide an implementation of our technique in the 𝕂 framework and showcase its performance on a custom benchmark based on higher-order coding errors such as reentrancy bugs.
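
A toy illustration of plain symbolic execution, which the paper combines with operational game semantics to handle open higher-order code: run a small branching program on symbolic inputs and collect one path condition per outcome. The mini-language and names below are hypothetical and far simpler than the paper's setting.

def symexec(prog, env, path=()):
    """prog: list of statements, each ('if', cond, then_block, else_block) or ('ret', expr).
    Conditions/expressions are format strings over the symbolic variables in env.
    Returns a list of (path_condition, returned_expression) pairs."""
    results = []
    for i, stmt in enumerate(prog):
        if stmt[0] == 'ret':
            results.append((path, stmt[1].format(**env)))
            return results
        if stmt[0] == 'if':
            _, cond, then_b, else_b = stmt
            c = cond.format(**env)
            rest = prog[i + 1:]
            results += symexec(then_b + rest, env, path + (c,))
            results += symexec(else_b + rest, env, path + ("not(" + c + ")",))
            return results
    return results

# abs-like toy program over a symbolic input x0
prog = [('if', '{x} < 0', [('ret', '-({x})')], [('ret', '{x}')])]
for pc, ret in symexec(prog, {'x': 'x0'}):
    print(' and '.join(pc) or 'true', '=>', ret)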

Cite as

Yu-Yang Lin and Nikos Tzevelekos. Symbolic Execution Game Semantics. In 5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 167, pp. 27:1-27:24, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{lin_et_al:LIPIcs.FSCD.2020.27,
  author =	{Lin, Yu-Yang and Tzevelekos, Nikos},
  title =	{{Symbolic Execution Game Semantics}},
  booktitle =	{5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)},
  pages =	{27:1--27:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-155-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{167},
  editor =	{Ariola, Zena M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2020.27},
  URN =		{urn:nbn:de:0030-drops-123493},
  doi =		{10.4230/LIPIcs.FSCD.2020.27},
  annote =	{Keywords: game semantics, symbolic execution, higher-order open programs}
}
Document
Track A: Algorithms, Complexity and Games
Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning

Authors: Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time Õ(s²(√m ε^{-10} + √n ε^{-12})), with ε the error of the solution. This gives an optimal dependence on m and n and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is Õ(√m + poly(r))·poly(log m, log n, B, ε^{-1}), with B an upper bound on the trace-norm of all input matrices. In particular the complexity depends only polylogarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given m measurements and a supply of copies of an unknown state ρ with rank at most r, we show we can find in time √m · poly(log m, log n, r, ε^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as ρ on the m measurements, up to error ε. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on their dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
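
The classical skeleton being "quantized" here is the matrix multiplicative weight update, whose iterate is a Gibbs (maximum-entropy-style) density matrix ρ ∝ exp(-ε Σ_t M_t); the quantum algorithms prepare such Gibbs states and estimate expectation values far more cheaply. Below is a dense classical NumPy/SciPy sketch of that update, illustrative only and exponentially slower in the dimension than the quantum routines.

import numpy as np
from scipy.linalg import expm

def gibbs_state(loss_sum, eps):
    """rho proportional to exp(-eps * loss_sum), normalized to trace one."""
    rho = expm(-eps * loss_sum)
    return rho / np.trace(rho).real

def mmw_iterates(losses, eps=0.1):
    """Matrix multiplicative weights: after each Hermitian loss matrix M_t,
    the new iterate is the Gibbs state of the accumulated losses."""
    n = losses[0].shape[0]
    acc = np.zeros((n, n), dtype=complex)
    for M in losses:
        acc += M
        yield gibbs_state(acc, eps)

rng = np.random.default_rng(1)
def herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

for rho in mmw_iterates([herm(4) for _ in range(3)]):
    print(np.trace(rho).real)   # each iterate is a valid density matrix (trace 1)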

Cite as

Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu. Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 27:1-27:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{brandao_et_al:LIPIcs.ICALP.2019.27,
  author =	{Brand\~{a}o, Fernando G. S. L. and Kalev, Amir and Li, Tongyang and Lin, Cedric Yen-Yu and Svore, Krysta M. and Wu, Xiaodi},
  title =	{{Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.27},
  URN =		{urn:nbn:de:0030-drops-106036},
  doi =		{10.4230/LIPIcs.ICALP.2019.27},
  annote =	{Keywords: quantum algorithms, semidefinite program, convex optimization}
}
Document
A Complete Characterization of Unitary Quantum Space

Authors: Bill Fefferman and Cedric Yen-Yu Lin

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Motivated by understanding the power of quantum computation with a restricted number of qubits, we give two complete characterizations of unitary quantum space-bounded computation. First we show that approximating an element of the inverse of a well-conditioned, efficiently encoded 2^{k(n)} × 2^{k(n)} matrix is complete for the class of problems solvable by quantum circuits acting on O(k(n)) qubits with all measurements at the end of the computation. Similarly, estimating the minimum eigenvalue of an efficiently encoded Hermitian 2^{k(n)} × 2^{k(n)} matrix is also complete for this class. In the logspace case, our results improve on previous results of Ta-Shma by giving new space-efficient quantum algorithms that avoid intermediate measurements, as well as showing matching hardness results. Additionally, as a consequence we show that preciseQMA, the version of QMA with exponentially small completeness-soundness gap, is equal to PSPACE. Thus, the problem of estimating the minimum eigenvalue of a local Hamiltonian to inverse-exponential precision is PSPACE-complete, which we show holds even in the frustration-free case. Finally, we can use this characterization to give a provable setting in which the ability to prepare the ground state of a local Hamiltonian is more powerful than the ability to prepare PEPS states. Interestingly, by suitably changing the parameterization of either of these problems we can completely characterize the power of quantum computation with simultaneously bounded time and space.
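
To make the second complete problem concrete, here is a brute-force classical sketch: estimating the minimum eigenvalue of an explicitly given Hermitian matrix by power iteration on a shifted matrix. The paper's point is that the same problem, for succinctly encoded 2^{k(n)}-dimensional matrices at suitable precision, exactly captures unitary quantum computation on O(k(n)) qubits; this code is exponential-time in k(n) and purely illustrative.

import numpy as np

def min_eigenvalue_estimate(H, iters=1000, seed=0):
    """Power iteration on M = c*I - H, where c upper-bounds the spectral radius
    of the Hermitian matrix H; the top eigenvalue of M is c - lambda_min."""
    n = H.shape[0]
    c = np.linalg.norm(H, 1)                      # induced 1-norm >= spectral radius
    M = c * np.eye(n) - H
    rng = np.random.default_rng(seed)
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return c - float(np.real(v.conj() @ (M @ v)))  # Rayleigh quotient, shifted back

# sanity check against dense diagonalization
A = np.random.default_rng(1).normal(size=(16, 16))
H = (A + A.T) / 2
print(min_eigenvalue_estimate(H), np.linalg.eigvalsh(H)[0])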

Cite as

Bill Fefferman and Cedric Yen-Yu Lin. A Complete Characterization of Unitary Quantum Space. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 4:1-4:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.ITCS.2018.4,
  author =	{Fefferman, Bill and Lin, Cedric Yen-Yu},
  title =	{{A Complete Characterization of Unitary Quantum Space}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{4:1--4:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.4},
  URN =		{urn:nbn:de:0030-drops-83242},
  doi =		{10.4230/LIPIcs.ITCS.2018.4},
  annote =	{Keywords: Quantum complexity, space complexity, complete problems, QMA}
}
Document
The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints

Authors: Hung-I Yu, Tien-Ching Lin, and Der-Tsai Lee

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
In 1982, Drezner proposed the (1|1)-centroid problem on the plane, in which two players, called the leader and the follower, open facilities to provide service to customers in a competitive manner. The leader opens the first facility, and then the follower opens the second. Each customer patronizes the facility closest to him (ties broken in favor of the leader's facility), thereby deciding the market share of the two players. The goal is to find the best position for the leader's facility so that his market share is maximized. The best algorithm for this problem is an O(n^2 log n)-time parametric search approach, which searches over the space of possible market share values. In the same paper, Drezner also proposed a general version of the (1|1)-centroid problem by introducing a minimal distance constraint R, such that the follower's facility is not allowed to be located within distance R of the leader's. He proposed an O(n^5 log n)-time algorithm for this general version by identifying O(n^4) points as candidates for the optimal solution and checking the market share of each of them. In this paper, we develop a new parametric search approach that searches over the O(n^4) candidate points, and present an O(n^2 log n)-time algorithm for the general version, thereby closing the O(n^3) gap between the two bounds.

Cite as

Hung-I Yu, Tien-Ching Lin, and Der-Tsai Lee. The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 64:1-64:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{yu_et_al:LIPIcs.ISAAC.2016.64,
  author =	{Yu, Hung-I and Lin, Tien-Ching and Lee, Der-Tsai},
  title =	{{The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{64:1--64:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.64},
  URN =		{urn:nbn:de:0030-drops-68337},
  doi =		{10.4230/LIPIcs.ISAAC.2016.64},
  annote =	{Keywords: competitive facility, Euclidean plane, parametric search}
}
Document
Space-Efficient Error Reduction for Unitary Quantum Computations

Authors: Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
This paper presents a general space-efficient method for error reduction for unitary quantum computation. Consider a polynomial-time quantum computation with completeness c and soundness s, either with or without a witness (corresponding to QMA and BQP, respectively). To convert this computation into a new computation with error at most 2^{-p}, the most space-efficient method known requires extra workspace of O(p log(1/(c-s))) qubits. This space requirement is too large for scenarios like logarithmic-space quantum computations. This paper shows an error-reduction method for unitary quantum computations (i.e., computations without intermediate measurements) that requires extra workspace of just O(log(p/(c-s))) qubits. This in particular gives the first method of strong amplification for logarithmic-space unitary quantum computations with two-sided bounded error. This also leads to a number of consequences in complexity theory, such as the uselessness of quantum witnesses in bounded-error logarithmic-space unitary quantum computations, the PSPACE upper bound for QMA with exponentially small completeness-soundness gap, and strong amplification for matchgate computations.
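
For intuition about the parameter p, the toy Monte Carlo below shows the standard amplification-by-repetition picture: repeating a constant-error computation p times and taking a majority vote drives the error down exponentially in p. The difficulty the paper addresses is doing such amplification unitarily, without intermediate measurements, using only O(log(p/(c-s))) extra qubits instead of workspace growing with p; the code is a generic classical illustration, not the paper's construction.

import random

def majority_error(p, single_run_error=1/3, trials=100_000):
    """Monte Carlo estimate of the failure probability of a p-fold majority vote
    over independent runs, each wrong with probability single_run_error."""
    wrong = 0
    for _ in range(trials):
        bad_runs = sum(random.random() < single_run_error for _ in range(p))
        wrong += bad_runs > p // 2
    return wrong / trials

for p in (1, 5, 15, 45):          # odd p avoids ties
    print(p, majority_error(p))   # error decays exponentially in p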

Cite as

Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura. Space-Efficient Error Reduction for Unitary Quantum Computations. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 14:1-14:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.ICALP.2016.14,
  author =	{Fefferman, Bill and Kobayashi, Hirotada and Yen-Yu Lin, Cedric and Morimae, Tomoyuki and Nishimura, Harumichi},
  title =	{{Space-Efficient Error Reduction for Unitary Quantum Computations}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.14},
  URN =		{urn:nbn:de:0030-drops-62975},
  doi =		{10.4230/LIPIcs.ICALP.2016.14},
  annote =	{Keywords: space-bounded computation, quantum Merlin-Arthur proof systems, error reduction, quantum computing}
}
Document
Programming Language Constructs Supporting Fault Tolerance

Authors: Christina Houben and Sebastian Houben

Published in: LITES, Volume 3, Issue 1 (2016). Leibniz Transactions on Embedded Systems, Volume 3, Issue 1


Abstract
In order to render software viable for highly safety-critical applications, we describe how to incorporate fault tolerance mechanisms into the real-time programming language PEARL. To this end, we present, classify, evaluate and illustrate known fault tolerance methods for software, and relate them to the requirements of the international standard IEC 61508-3 for functional safety. We contribute PEARL-2020 programming language constructs for fault tolerance methods that need to be implemented by operating systems, as well as code snippets and libraries for methods that are independent of the runtime system.

Cite as

Christina Houben and Sebastian Houben. Programming Language Constructs Supporting Fault Tolerance. In LITES, Volume 3, Issue 1 (2016). Leibniz Transactions on Embedded Systems, Volume 3, Issue 1, pp. 01:1-01:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{houben_et_al:LITES-v003-i001-a001,
  author =	{Houben, Christina and Houben, Sebastian},
  title =	{{Programming Language Constructs Supporting Fault Tolerance}},
  booktitle =	{LITES, Volume 3, Issue 1 (2016)},
  pages =	{01:1--01:20},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2016},
  volume =	{3},
  number =	{1},
  editor =	{Houben, Christina and Houben, Sebastian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES-v003-i001-a001},
  doi =		{10.4230/LITES-v003-i001-a001},
  annote =	{Keywords: Fault tolerance, Functional safety, PEARL, Embedded systems, Software engineering}
}
Document
Oracles with Costs

Authors: Shelby Kimmel, Cedric Yen-Yu Lin, and Han-Hsuan Lin

Published in: LIPIcs, Volume 44, 10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015)


Abstract
While powerful tools have been developed to analyze quantum query complexity, there are still many natural problems that do not fit neatly into the black box model of oracles. We create a new model that allows multiple oracles with differing costs. This model captures more of the difficulty of certain natural problems. We test this model on a simple problem, Search with Two Oracles, for which we create a quantum algorithm that we prove is asymptotically optimal. We further give some evidence, using a geometric picture of Grover's algorithm, that our algorithm is exactly optimal.

Cite as

Shelby Kimmel, Cedric Yen-Yu Lin, and Han-Hsuan Lin. Oracles with Costs. In 10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 44, pp. 1-26, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{kimmel_et_al:LIPIcs.TQC.2015.1,
  author =	{Kimmel, Shelby and Lin, Cedric Yen-Yu and Lin, Han-Hsuan},
  title =	{{Oracles with Costs}},
  booktitle =	{10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015)},
  pages =	{1--26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-96-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{44},
  editor =	{Beigi, Salman and K\"{o}nig, Robert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2015.1},
  URN =		{urn:nbn:de:0030-drops-55459},
  doi =		{10.4230/LIPIcs.TQC.2015.1},
  annote =	{Keywords: Quantum Algorithms, Query Complexity, Amplitude Amplification}
}
