18 Search Results for "Lin, Yu-Yang"


Document
Tight Bounds for Quantum Phase Estimation and Related Problems

Authors: Nikhil S. Mande and Ronald de Wolf

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Phase estimation, due to Kitaev [arXiv'95], is one of the most fundamental subroutines in quantum computing, used in Shor’s factoring algorithm, optimization algorithms, quantum chemistry algorithms, and many others. In the basic scenario, one is given black-box access to a unitary U, and an eigenstate |ψ⟩ of U with unknown eigenvalue e^{iθ}, and the task is to estimate the eigenphase θ within ±δ, with high probability. The repeated application of U and U^{-1} is typically the most expensive part of phase estimation, so for us the cost of an algorithm will be that number of applications. Motivated by the "guided Hamiltonian problem" in quantum chemistry, we tightly characterize the cost of several variants of phase estimation where we are no longer given an arbitrary eigenstate, but are required to estimate the maximum eigenphase of U, aided by advice in the form of states (or a unitary preparing those states) which are promised to have at least a certain overlap γ with the top eigenspace. We give algorithms and matching lower bounds (up to logarithmic factors) for all ranges of parameters. We show a crossover point below which advice is not helpful: o(1/γ²) copies of the advice state (or o(1/γ) applications of an advice-preparing unitary) are not significantly better than having no advice at all. We also show that having knowledge of the eigenbasis of U does not significantly reduce cost. Our upper bounds use the subroutine of generalized maximum-finding of van Apeldoorn, Gilyén, Gribling, and de Wolf [Quantum'20], the state-based Hamiltonian simulation of Lloyd, Mohseni, and Rebentrost [Nature Physics'13], and several other techniques. Our lower bounds follow by reductions from a fractional version of the Boolean OR function with advice, which we lower bound by a simple modification of the adversary method of Ambainis [JCSS'02]. As an immediate consequence we also obtain a lower bound on the complexity of the Unitary recurrence time problem, matching an upper bound of She and Yuen [ITCS'23] and resolving an open question posed by them. Lastly, we study how efficiently one can reduce the error probability in the basic phase-estimation scenario. We show that an algorithm solving phase estimation to precision δ with error probability at most ε must have cost Ω(1/δ log(1/ε)), matching the obvious way to error-reduce the basic constant-error-probability phase estimation algorithm. This contrasts with some other scenarios in quantum computing (e.g. search) where error-reduction costs only a factor O(√{log(1/ε)}). Our lower bound technique uses a variant of the polynomial method with trigonometric polynomials.
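
For reference on the cost model used above, the following is a minimal numerical sketch of textbook phase estimation, not of the paper's advice-aided algorithms: with t ancilla qubits the measurement outcome k concentrates near 2^t θ/(2π), each run applies U a total of 2^t - 1 times, and taking the median over repeated runs reduces the error probability at the multiplicative log(1/ε) cost discussed above. The eigenphase, qubit count, and repetition count below are arbitrary illustrative choices.

import numpy as np

def qpe_distribution(theta, t):
    """Outcome distribution of textbook t-qubit phase estimation on an
    eigenstate with eigenphase theta: the amplitude of outcome k is
    (1/2^t) * sum_j exp(i*j*(theta - 2*pi*k/2^t))."""
    N = 2 ** t
    j, k = np.arange(N), np.arange(N)
    amps = np.exp(1j * np.outer(j, theta - 2 * np.pi * k / N)).sum(axis=0) / N
    probs = np.abs(amps) ** 2
    return probs / probs.sum()              # renormalise away floating-point error

rng = np.random.default_rng(0)
theta = 2 * np.pi * 0.30127                 # "unknown" eigenphase (illustrative)
t = 8                                       # precision ~ 2*pi/2^t; cost 2^t - 1 applications of U per run
probs = qpe_distribution(theta, t)
runs = rng.choice(2 ** t, size=31, p=probs)            # 31 independent repetitions
estimate = 2 * np.pi * np.median(runs) / 2 ** t        # median suppresses outlier runs
print(f"estimate {estimate:.5f}  vs  true {theta:.5f}")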

Cite as

Nikhil S. Mande and Ronald de Wolf. Tight Bounds for Quantum Phase Estimation and Related Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 81:1-81:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{mande_et_al:LIPIcs.ESA.2023.81,
  author =	{Mande, Nikhil S. and de Wolf, Ronald},
  title =	{{Tight Bounds for Quantum Phase Estimation and Related Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{81:1--81:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.81},
  URN =		{urn:nbn:de:0030-drops-187346},
  doi =		{10.4230/LIPIcs.ESA.2023.81},
  annote =	{Keywords: Phase estimation, quantum computing}
}
Document
On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation

Authors: Nai-Hui Chia, Kai-Min Chung, Yao-Ching Hsieh, Han-Hsuan Lin, Yao-Ting Lin, and Yu-Ching Shen

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Hamiltonian simulation is one of the most important problems in the field of quantum computing. There have been extensive efforts to design faster simulation algorithms, and, as expected, the evolution time T greatly affects the algorithm runtime. While some specific types of Hamiltonians can be fast-forwarded, i.e., simulated within time o(T), for several large classes of Hamiltonians (e.g., all local/sparse Hamiltonians), existing simulation algorithms require running time at least linear in the evolution time T. On the other hand, while there exist Ω(T) circuit-size lower bounds for some large classes of Hamiltonians, these lower bounds do not rule out the possibility of simulating Hamiltonians with large but "low-depth" circuits by running things in parallel. As a result, physical systems whose size scales with T could potentially perform a fast-forwarded simulation. It is therefore intriguing whether we can achieve fast Hamiltonian simulation with the power of parallelism. In this work, we give a negative result for this open problem in various settings. In the oracle model, we prove that there are time-independent sparse Hamiltonians that cannot be simulated via an oracle circuit of depth o(T). In the plain model, relying on the random oracle heuristic, we show that there exist time-independent local Hamiltonians and time-dependent geometrically local Hamiltonians on n qubits that cannot be simulated via an oracle circuit of depth o(T/n^c), where c is a constant. Lastly, we generalize the above results and show that any simulator that is itself a geometrically local Hamiltonian cannot perform the simulation much faster than parallel quantum algorithms.
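
For intuition on why generic simulators spend circuit depth growing with T, here is a small numerical sketch, my own illustration rather than anything from the paper: first-order Trotterization of a toy two-qubit Hamiltonian, where the number of sequential product-formula steps r must grow with the evolution time T to keep the error fixed.

import numpy as np
from scipy.linalg import expm

# Pauli matrices and a two-qubit local Hamiltonian H = A + B with non-commuting terms
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
A = np.kron(Z, Z)
B = np.kron(X, I2) + np.kron(I2, X)
H = A + B

def trotter(T, r):
    """First-order Trotter approximation of exp(-i H T) with r steps.
    The sequential depth is proportional to r, which has to grow with T
    for fixed accuracy -- the 'no generic fast-forwarding' intuition."""
    step = expm(-1j * A * T / r) @ expm(-1j * B * T / r)
    return np.linalg.matrix_power(step, r)

T = 4.0
exact = expm(-1j * H * T)
for r in (10, 100, 1000):
    err = np.linalg.norm(trotter(T, r) - exact, 2)
    print(f"r = {r:5d}   spectral-norm error = {err:.3e}")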

Cite as

Nai-Hui Chia, Kai-Min Chung, Yao-Ching Hsieh, Han-Hsuan Lin, Yao-Ting Lin, and Yu-Ching Shen. On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 33:1-33:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chia_et_al:LIPIcs.CCC.2023.33,
  author =	{Chia, Nai-Hui and Chung, Kai-Min and Hsieh, Yao-Ching and Lin, Han-Hsuan and Lin, Yao-Ting and Shen, Yu-Ching},
  title =	{{On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{33:1--33:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.33},
  URN =		{urn:nbn:de:0030-drops-183038},
  doi =		{10.4230/LIPIcs.CCC.2023.33},
  annote =	{Keywords: Hamiltonian simulation, Depth lower bound, Parallel query lower bound}
}
Document
Extended Abstract
Detecting and Quantifying Crypto Wash Trading (Extended Abstract)

Authors: Lin William Cong, Xi Li, Ke Tang, and Yang Yang

Published in: OASIcs, Volume 97, 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)


Abstract
We introduce systematic tests exploiting robust statistical and behavioral patterns in trading to detect fake transactions on 29 cryptocurrency exchanges. Regulated exchanges feature patterns consistently observed in financial markets and nature; abnormal first-significant-digit distributions, size rounding, and transaction tail distributions on unregulated exchanges reveal rampant manipulations unlikely driven by strategy or exchange heterogeneity. We quantify the wash trading on each unregulated exchange, which averaged over 70% of the reported volume. We further document how these fabricated volumes (trillions of dollars annually) improve exchange ranking, temporarily distort prices, and relate to exchange characteristics (e.g., age and userbase), market conditions, and regulation.
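
One of the statistical regularities mentioned above, the first-significant-digit distribution, is commonly checked against Benford's law. The sketch below is a generic version of such a check on synthetic trade sizes; the function names and the plain chi-square statistic are my own choices, and the paper's actual battery of tests (which also covers size rounding and tail distributions) is more involved.

import numpy as np
from scipy.stats import chisquare

def first_significant_digit(x):
    """First significant digit (1-9) of each positive value in x."""
    x = np.asarray(x, dtype=float)
    exponent = np.floor(np.log10(x))
    return (x / 10 ** exponent).astype(int)

def benford_test(trade_sizes):
    """Chi-square test of observed first-digit frequencies against
    Benford's law, P(d) = log10(1 + 1/d) for d = 1..9."""
    digits = first_significant_digit(trade_sizes)
    observed = np.bincount(digits, minlength=10)[1:10]
    benford = np.log10(1 + 1 / np.arange(1, 10))
    expected = benford * observed.sum()
    return chisquare(observed, expected)

# Toy usage with synthetic, roughly Benford-like log-normal trade sizes
rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=3.0, sigma=2.0, size=10_000)
print(benford_test(sizes))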

Cite as

Lin William Cong, Xi Li, Ke Tang, and Yang Yang. Detecting and Quantifying Crypto Wash Trading (Extended Abstract). In 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021). Open Access Series in Informatics (OASIcs), Volume 97, pp. 10:1-10:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cong_et_al:OASIcs.Tokenomics.2021.10,
  author =	{Cong, Lin William and Li, Xi and Tang, Ke and Yang, Yang},
  title =	{{Detecting and Quantifying Crypto Wash Trading}},
  booktitle =	{3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)},
  pages =	{10:1--10:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-220-4},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{97},
  editor =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.Tokenomics.2021.10},
  URN =		{urn:nbn:de:0030-drops-159072},
  doi =		{10.4230/OASIcs.Tokenomics.2021.10},
  annote =	{Keywords: Bitcoin, Cryptocurrency, FinTech, Forensic Finance, Fraud Detection, Regulation}
}
Document
Feature Cross Search via Submodular Optimization

Authors: Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, and Qian Yu

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In this paper, we study feature cross search as a fundamental primitive in feature engineering. The importance of feature cross search especially for the linear model has been known for a while, with well-known textbook examples. In this problem, the goal is to select a small subset of features, combine them to form a new feature (called the crossed feature) by considering their Cartesian product, and find feature crosses to learn an accurate model. In particular, we study the problem of maximizing a normalized Area Under the Curve (AUC) of the linear model trained on the crossed feature column. First, we show that it is not possible to provide an n^{1/log log n}-approximation algorithm for this problem unless the exponential time hypothesis fails. This result also rules out the possibility of solving this problem in polynomial time unless 𝖯 = NP. On the positive side, by assuming the naïve Bayes assumption, we show that there exists a simple greedy (1-1/e)-approximation algorithm for this problem. This result is established by relating the AUC to the total variation of the commutator of two probability measures and showing that the total variation of the commutator is monotone and submodular. To show this, we relate the submodularity of this function to the positive semi-definiteness of a corresponding kernel matrix. Then, we use Bochner’s theorem to prove the positive semi-definiteness by showing that its inverse Fourier transform is non-negative everywhere. Our techniques and structural results might be of independent interest.
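
For readers unfamiliar with the (1-1/e) guarantee invoked above, here is the standard greedy routine for maximizing a monotone submodular function under a cardinality constraint, shown on a toy coverage objective; in the paper the objective is the normalized AUC of the crossed feature under the naïve Bayes assumption, not coverage.

def greedy_submodular(ground_set, f, k):
    """Classic greedy algorithm for monotone submodular maximization
    subject to |S| <= k; achieves a (1 - 1/e) approximation
    (Nemhauser-Wolsey-Fisher)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)        # marginal gain of adding e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:                    # no element improves f
            break
        S.add(best)
    return S

# Toy usage: set coverage, a textbook monotone submodular function
sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}, "d": {1, 5, 6}}

def coverage(S):
    """Number of distinct items covered by the chosen sets."""
    return len(set().union(*(sets[e] for e in S))) if S else 0

print(greedy_submodular(set(sets), coverage, k=2))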

Cite as

Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, and Qian Yu. Feature Cross Search via Submodular Optimization. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 31:1-31:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ESA.2021.31,
  author =	{Chen, Lin and Esfandiari, Hossein and Fu, Gang and Mirrokni, Vahab S. and Yu, Qian},
  title =	{{Feature Cross Search via Submodular Optimization}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{31:1--31:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.31},
  URN =		{urn:nbn:de:0030-drops-146124},
  doi =		{10.4230/LIPIcs.ESA.2021.31},
  annote =	{Keywords: Feature engineering, feature cross, submodularity}
}
Document
LRBinner: Binning Long Reads in Metagenomics Datasets

Authors: Anuradha Wickramarachchi and Yu Lin

Published in: LIPIcs, Volume 201, 21st International Workshop on Algorithms in Bioinformatics (WABI 2021)


Abstract
Advancements in metagenomics sequencing allow the study of microbial communities directly from their environments. Metagenomics binning is a key step in the species characterisation of microbial communities. Next-generation sequencing reads are usually assembled into contigs for metagenomics binning mainly due to the limited information within short reads. Third-generation sequencing provides much longer reads that have lengths similar to the contigs assembled from short reads. However, existing contig-binning tools cannot be directly applied on long reads due to the absence of coverage information and the presence of high error rates. The few existing long-read binning tools either use only composition or use composition and coverage information separately. This may ignore bins that correspond to low-abundance species or erroneously split bins that correspond to species with non-uniform coverages. Here we present a reference-free binning approach, LRBinner, that combines composition and coverage information of complete long-read datasets. LRBinner also uses a distance-histogram-based clustering algorithm to extract clusters with varying sizes. The experimental results on both simulated and real datasets show that LRBinner achieves the best binning accuracy against the baselines. Moreover, we show that binning reads using LRBinner prior to assembly reduces computational resources for assembly while attaining satisfactory assembly qualities.
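
The "composition" signal referred to above is typically a normalized k-mer (e.g., tetranucleotide) frequency vector per read. The snippet below sketches only that generic feature; LRBinner's actual pipeline combines such composition features with coverage information in a learned latent space before clustering, none of which is reproduced here.

from itertools import product
from collections import Counter

def kmer_profile(read, k=4):
    """Normalized k-mer (default: tetranucleotide) frequency vector,
    the standard composition feature used in metagenomics binning."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(read[i:i + k] for i in range(len(read) - k + 1))
    total = max(sum(counts[m] for m in kmers), 1)
    return [counts[m] / total for m in kmers]

profile = kmer_profile("ACGTACGTGGCCAATTACGT")
print(len(profile), sum(profile))   # 256 features, normalized to sum to 1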

Cite as

Anuradha Wickramarachchi and Yu Lin. LRBinner: Binning Long Reads in Metagenomics Datasets. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 201, pp. 11:1-11:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{wickramarachchi_et_al:LIPIcs.WABI.2021.11,
  author =	{Wickramarachchi, Anuradha and Lin, Yu},
  title =	{{LRBinner: Binning Long Reads in Metagenomics Datasets}},
  booktitle =	{21st International Workshop on Algorithms in Bioinformatics (WABI 2021)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-200-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{201},
  editor =	{Carbone, Alessandra and El-Kebir, Mohammed},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2021.11},
  URN =		{urn:nbn:de:0030-drops-143644},
  doi =		{10.4230/LIPIcs.WABI.2021.11},
  annote =	{Keywords: Metagenomics binning, long reads, machine learning, clustering}
}
Document
GraphBin2: Refined and Overlapped Binning of Metagenomic Contigs Using Assembly Graphs

Authors: Vijini G. Mallawaarachchi, Anuradha S. Wickramarachchi, and Yu Lin

Published in: LIPIcs, Volume 172, 20th International Workshop on Algorithms in Bioinformatics (WABI 2020)


Abstract
Metagenomic sequencing allows us to study the structure, diversity and ecology of microbial communities without the necessity of obtaining pure cultures. In many metagenomics studies, the reads obtained from metagenomics sequencing are first assembled into longer contigs, and these contigs are then binned into clusters such that contigs in a cluster are expected to come from the same species. As different species may share common sequences in their genomes, one assembled contig may belong to multiple species. However, existing tools for contig binning only support non-overlapped binning, i.e., each contig is assigned to at most one bin (species). In this paper, we introduce GraphBin2, which refines the binning results obtained from existing tools and, more importantly, is able to assign contigs to multiple bins. GraphBin2 uses the connectivity and coverage information from assembly graphs to adjust existing binning results on contigs and to infer contigs shared by multiple species. Experimental results on both simulated and real datasets demonstrate that GraphBin2 not only improves the binning results of existing tools but also supports assigning contigs to multiple bins.
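
The core idea of letting assembly-graph connectivity adjust bins can be illustrated with a tiny label-propagation sketch. This is my own simplification: GraphBin2 additionally uses coverage information, detects contigs shared by multiple species, and can assign a contig to several bins, none of which appears below.

def propagate_labels(adjacency, labels, rounds=10):
    """Minimal label propagation on an assembly-style graph: an unlabeled
    contig adopts the majority label among its labelled neighbours."""
    labels = dict(labels)
    for _ in range(rounds):
        changed = False
        for node, neighbours in adjacency.items():
            if node in labels:
                continue
            votes = [labels[n] for n in neighbours if n in labels]
            if votes:
                labels[node] = max(set(votes), key=votes.count)
                changed = True
        if not changed:
            break
    return labels

# Toy contig graph: the unlabeled contigs c2 and c3 inherit the bin of their neighbours
adjacency = {"c1": ["c2"], "c2": ["c1", "c3"], "c3": ["c2", "c4"], "c4": ["c3"]}
print(propagate_labels(adjacency, {"c1": "binA", "c4": "binA"}))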

Cite as

Vijini G. Mallawaarachchi, Anuradha S. Wickramarachchi, and Yu Lin. GraphBin2: Refined and Overlapped Binning of Metagenomic Contigs Using Assembly Graphs. In 20th International Workshop on Algorithms in Bioinformatics (WABI 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 172, pp. 8:1-8:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{mallawaarachchi_et_al:LIPIcs.WABI.2020.8,
  author =	{Mallawaarachchi, Vijini G. and Wickramarachchi, Anuradha S. and Lin, Yu},
  title =	{{GraphBin2: Refined and Overlapped Binning of Metagenomic Contigs Using Assembly Graphs}},
  booktitle =	{20th International Workshop on Algorithms in Bioinformatics (WABI 2020)},
  pages =	{8:1--8:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-161-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{172},
  editor =	{Kingsford, Carl and Pisanti, Nadia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2020.8},
  URN =		{urn:nbn:de:0030-drops-127974},
  doi =		{10.4230/LIPIcs.WABI.2020.8},
  annote =	{Keywords: Metagenomics binning, contigs, assembly graphs, overlapped binning}
}
Document
Symbolic Execution Game Semantics

Authors: Yu-Yang Lin and Nikos Tzevelekos

Published in: LIPIcs, Volume 167, 5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)


Abstract
We present a framework for symbolically executing and model checking higher-order programs with external (open) methods. We focus on the client-library paradigm and in particular we aim to check libraries with respect to any definable client. We combine traditional symbolic execution techniques with operational game semantics to build a symbolic execution semantics that captures arbitrary external behaviour. We prove the symbolic semantics to be sound and complete. This yields a bounded technique by imposing bounds on the depth of recursion and callbacks. We provide an implementation of our technique in the 𝕂 framework and showcase its performance on a custom benchmark based on higher-order coding errors such as reentrancy bugs.
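
At its most basic, symbolic execution collects a path condition per control-flow path and asks a solver for a satisfying input. The fragment below shows only that first-order step with Z3; the paper's contribution is extending symbolic execution to higher-order programs with open external methods via operational game semantics, with bounds on recursion and callback depth, which this sketch does not attempt.

from z3 import Int, Solver, sat

def satisfiable_path(path_condition):
    """Return a model witnessing a feasible path, or None if infeasible."""
    solver = Solver()
    for constraint in path_condition:
        solver.add(constraint)
    return solver.model() if solver.check() == sat else None

# Symbolically execute:  if x > 0: assert x != 3
x = Int("x")
failing_path = [x > 0, x == 3]        # path condition reaching the failed assertion
print("counterexample:", satisfiable_path(failing_path))   # e.g. [x = 3]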

Cite as

Yu-Yang Lin and Nikos Tzevelekos. Symbolic Execution Game Semantics. In 5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 167, pp. 27:1-27:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{lin_et_al:LIPIcs.FSCD.2020.27,
  author =	{Lin, Yu-Yang and Tzevelekos, Nikos},
  title =	{{Symbolic Execution Game Semantics}},
  booktitle =	{5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)},
  pages =	{27:1--27:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-155-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{167},
  editor =	{Ariola, Zena M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2020.27},
  URN =		{urn:nbn:de:0030-drops-123493},
  doi =		{10.4230/LIPIcs.FSCD.2020.27},
  annote =	{Keywords: game semantics, symbolic execution, higher-order open programs}
}
Document
Track A: Algorithms, Complexity and Games
Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning

Authors: Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time Õ(s^2 (√m ε^{-10} + √n ε^{-12})), with ε the error of the solution. This gives an optimal dependence in terms of m and n, and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is Õ(√m + poly(r))·poly(log m, log n, B, ε^{-1}), with B an upper bound on the trace-norm of all input matrices. In particular the complexity depends only polylogarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given m measurements and a supply of copies of an unknown state ρ with rank at most r, we show we can find in time √m·poly(log m, log n, r, ε^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as ρ on the m measurements, up to error ε. The density matrix obtained is an approximation to the maximum-entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weights update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on its dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
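
For concreteness, the Gibbs states at the heart of the matrix multiplicative weights framework are defined as below. This is only the classical definition, computed by dense matrix exponentiation on a toy Hamiltonian; the paper's contribution is a quantum Gibbs sampler for low-rank Hamiltonians given as quantum states, which this snippet does not reflect.

import numpy as np
from scipy.linalg import expm

def gibbs_state(H, beta=1.0):
    """Gibbs (thermal) state exp(-beta*H) / Tr[exp(-beta*H)]."""
    rho = expm(-beta * np.asarray(H, dtype=complex))
    return rho / np.trace(rho)

H = np.diag([0.0, 1.0, 2.0])          # toy Hamiltonian
rho = gibbs_state(H, beta=2.0)
print(np.real(np.diag(rho)))          # Boltzmann weights on the eigenvalues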

Cite as

Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu. Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{brandao_et_al:LIPIcs.ICALP.2019.27,
  author =	{Brand\~{a}o, Fernando G. S. L. and Kalev, Amir and Li, Tongyang and Lin, Cedric Yen-Yu and Svore, Krysta M. and Wu, Xiaodi},
  title =	{{Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.27},
  URN =		{urn:nbn:de:0030-drops-106036},
  doi =		{10.4230/LIPIcs.ICALP.2019.27},
  annote =	{Keywords: quantum algorithms, semidefinite program, convex optimization}
}
Document
The One-Way Communication Complexity of Dynamic Time Warping Distance

Authors: Vladimir Braverman, Moses Charikar, William Kuszmaul, David P. Woodruff, and Lin F. Yang

Published in: LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)


Abstract
We resolve the randomized one-way communication complexity of Dynamic Time Warping (DTW) distance. We show that there is an efficient one-way communication protocol using Õ(n/α) bits for the problem of computing an α-approximation for DTW between strings x and y of length n, and we prove a lower bound of Ω(n/α) bits for the same problem. Our communication protocol works for strings over an arbitrary metric of polynomial size and aspect ratio, and we optimize the logarithmic factors depending on properties of the underlying metric, such as when the points are low-dimensional integer vectors equipped with various metrics or have bounded doubling dimension. We also consider linear sketches of DTW, showing that such sketches must have size Ω(n).
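
For reference, DTW itself is computed by the classic quadratic-time dynamic program below; the abstract is about one-way communication and sketching for approximating this quantity, not about computing it exactly.

import numpy as np

def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic O(n*m) dynamic program for the Dynamic Time Warping distance."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))   # 0.0: the second string is a time-warp of the first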

Cite as

Vladimir Braverman, Moses Charikar, William Kuszmaul, David P. Woodruff, and Lin F. Yang. The One-Way Communication Complexity of Dynamic Time Warping Distance. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.SoCG.2019.16,
  author =	{Braverman, Vladimir and Charikar, Moses and Kuszmaul, William and Woodruff, David P. and Yang, Lin F.},
  title =	{{The One-Way Communication Complexity of Dynamic Time Warping Distance}},
  booktitle =	{35th International Symposium on Computational Geometry (SoCG 2019)},
  pages =	{16:1--16:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-104-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{129},
  editor =	{Barequet, Gill and Wang, Yusu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.16},
  URN =		{urn:nbn:de:0030-drops-104203},
  doi =		{10.4230/LIPIcs.SoCG.2019.16},
  annote =	{Keywords: dynamic time warping, one-way communication complexity, tree metrics}
}
Document
Approximate Convex Hull of Data Streams

Authors: Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, and Lin F. Yang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
Given a finite set of points P ⊆ ℝ^d, we would like to find a small subset S ⊆ P such that the convex hull of S approximately contains P. More formally, every point in P is within distance ε from the convex hull of S. Such a subset S is called an ε-hull. Computing an ε-hull is an important problem in computational geometry, machine learning, and approximation algorithms. In many applications, the set P is too large to fit in memory. We consider the streaming model, where the algorithm receives the points of P sequentially and strives to use a minimal amount of memory. Existing streaming algorithms for computing an ε-hull require O(ε^{(1-d)/2}) space, which is optimal for a worst-case input. However, this ignores the structure of the data. The minimal size of an ε-hull of P, which we denote by OPT, can be much smaller. A natural question is whether a streaming algorithm can compute an ε-hull using only O(OPT) space. We begin with lower bounds that show, under a reasonable streaming model, that it is not possible to have a single-pass streaming algorithm that computes an ε-hull with O(OPT) space. We instead propose three relaxations of the problem for which we can compute ε-hulls using space near-linear in the optimal size. Our first algorithm, for points in ℝ^2 that arrive in random order, uses O(log n · OPT) space. Our second algorithm, for points in ℝ^2, makes O(log(ε^{-1})) passes before outputting the ε-hull and requires O(OPT) space. Our third algorithm, for points in ℝ^d for any fixed dimension d, outputs, with high probability, an ε-hull for all but a δ-fraction of directions and requires O(OPT · log OPT) space.
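
The worst-case O(ε^{(1-d)/2}) bound mentioned above comes from direction-based constructions like the sketch below, which keeps one extreme input point per sampled direction. This is only the classic ε-kernel-style baseline; the paper's algorithms are instead adaptive to OPT, the size of the smallest ε-hull, which this snippet is not.

import numpy as np

def directional_hull(points, num_directions=64):
    """Keep, for each of a fixed set of planar directions, the input point
    that is extreme in that direction (a coarse epsilon-hull candidate)."""
    points = np.asarray(points, dtype=float)
    angles = np.linspace(0, 2 * np.pi, num_directions, endpoint=False)
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    extreme = np.unique(np.argmax(points @ directions.T, axis=0))
    return points[extreme]

pts = np.random.default_rng(2).normal(size=(10_000, 2))
hull = directional_hull(pts)
print(len(hull), "points kept out of", len(pts))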

Cite as

Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, and Lin F. Yang. Approximate Convex Hull of Data Streams. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 21:1-21:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{blum_et_al:LIPIcs.ICALP.2018.21,
  author =	{Blum, Avrim and Braverman, Vladimir and Kumar, Ananya and Lang, Harry and Yang, Lin F.},
  title =	{{Approximate Convex Hull of Data Streams}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{21:1--21:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.21},
  URN =		{urn:nbn:de:0030-drops-90254},
  doi =		{10.4230/LIPIcs.ICALP.2018.21},
  annote =	{Keywords: Convex Hulls, Streaming Algorithms, Epsilon Kernels, Sparse Coding}
}
Document
Revisiting Frequency Moment Estimation in Random Order Streams

Authors: Vladimir Braverman, Emanuele Viola, David P. Woodruff, and Lin F. Yang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We revisit one of the classic problems in the data stream literature, namely, that of estimating the frequency moments F_p for 0 < p < 2 of an underlying n-dimensional vector presented as a sequence of additive updates in a stream. It is well-known that using p-stable distributions one can approximate any of these moments up to a multiplicative (1+ε)-factor using O(ε^{-2} log n) bits of space, and this space bound is optimal up to a constant factor in the turnstile streaming model. We show that, surprisingly, if one instead considers the popular random-order model of insertion-only streams, in which the updates to the underlying vector arrive in a random order, then one can beat this space bound and achieve Õ(ε^{-2} + log n) bits of space, where the Õ hides poly(log(1/ε) + log log n) factors. If ε^{-2} ≈ log n, this represents a roughly quadratic improvement in the space achievable in turnstile streams. Our algorithm is in fact deterministic, and we show our space bound is optimal up to poly(log(1/ε) + log log n) factors for deterministic algorithms in the random order model. We also obtain a similar improvement in space for p = 2 whenever F_2 ≳ log n · F_1.
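
The classical turnstile baseline being improved upon is the p-stable sketch; for p = 1 it reduces to Cauchy projections with a median estimator, as in the sketch below. The function is purely illustrative: a real implementation would generate the projection matrix pseudorandomly rather than store it, and the paper's random-order algorithm works quite differently.

import numpy as np

def f1_sketch_estimate(updates, n, k=200, seed=0):
    """Estimate F_1 = ||x||_1 of a turnstile stream from k Cauchy projections:
    by 1-stability each projection is distributed as ||x||_1 times a standard
    Cauchy, whose absolute value has median 1, so the median of |projections|
    estimates ||x||_1 without rescaling."""
    rng = np.random.default_rng(seed)
    A = rng.standard_cauchy((k, n))     # p-stable (p = 1) projection matrix
    sketch = np.zeros(k)
    for i, delta in updates:            # stream of (coordinate, increment) pairs
        sketch += delta * A[:, i]
    return np.median(np.abs(sketch))

stream = [(0, 3.0), (5, -2.0), (0, 1.0), (7, 4.0)]    # final vector has entries 4, -2, 4
print(f1_sketch_estimate(stream, n=10))                # should be close to F_1 = 10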

Cite as

Vladimir Braverman, Emanuele Viola, David P. Woodruff, and Lin F. Yang. Revisiting Frequency Moment Estimation in Random Order Streams. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.ICALP.2018.25,
  author =	{Braverman, Vladimir and Viola, Emanuele and Woodruff, David P. and Yang, Lin F.},
  title =	{{Revisiting Frequency Moment Estimation in Random Order Streams}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.25},
  URN =		{urn:nbn:de:0030-drops-90294},
  doi =		{10.4230/LIPIcs.ICALP.2018.25},
  annote =	{Keywords: Data Stream, Frequency Moments, Random Order, Space Complexity, Insertion Only Stream}
}
Document
A Complete Characterization of Unitary Quantum Space

Authors: Bill Fefferman and Cedric Yen-Yu Lin

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Motivated by understanding the power of quantum computation with a restricted number of qubits, we give two complete characterizations of unitary quantum space-bounded computation. First we show that approximating an element of the inverse of a well-conditioned, efficiently encoded 2^{k(n)} × 2^{k(n)} matrix is complete for the class of problems solvable by quantum circuits acting on O(k(n)) qubits with all measurements at the end of the computation. Similarly, estimating the minimum eigenvalue of an efficiently encoded Hermitian 2^{k(n)} × 2^{k(n)} matrix is also complete for this class. In the logspace case, our results improve on previous results of Ta-Shma by giving new space-efficient quantum algorithms that avoid intermediate measurements, as well as showing matching hardness results. Additionally, as a consequence we show that preciseQMA, the version of QMA with exponentially small completeness-soundness gap, is equal to PSPACE. Thus, the problem of estimating the minimum eigenvalue of a local Hamiltonian to inverse exponential precision is PSPACE-complete, which we show holds even in the frustration-free case. Finally, we can use this characterization to give a provable setting in which the ability to prepare the ground state of a local Hamiltonian is more powerful than the ability to prepare PEPS states. Interestingly, by suitably changing the parameterization of either of these problems we can completely characterize the power of quantum computation with simultaneously bounded time and space.

Cite as

Bill Fefferman and Cedric Yen-Yu Lin. A Complete Characterization of Unitary Quantum Space. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 4:1-4:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.ITCS.2018.4,
  author =	{Fefferman, Bill and Lin, Cedric Yen-Yu},
  title =	{{A Complete Characterization of Unitary Quantum Space}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{4:1--4:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.4},
  URN =		{urn:nbn:de:0030-drops-83242},
  doi =		{10.4230/LIPIcs.ITCS.2018.4},
  annote =	{Keywords: Quantum complexity, space complexity, complete problems, QMA}
}
Document
Smart Contract Execution - the (+-)-Biased Ballot Problem

Authors: Lin Chen, Lei Xu, Zhimin Gao, Nolan Shah, Yang Lu, and Weidong Shi

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
Transaction systems built on top of blockchains, especially smart contracts, are becoming an important part of the world economy. However, there is a lack of formal study of the behavior of users in these systems, which leaves the correctness and security of such systems without a solid foundation. Unlike mining, in which the reward for mining a block is fixed, different execution results of a smart contract may lead to significantly different payoffs for users, which gives some users more incentive to follow a branch that contains a wrong result, even if the branch is shorter. It is thus important to understand the exact probability that a branch is selected by the system. We formulate this problem as the (+-)-Biased Ballot Problem as follows: there are n voters who, one by one, vote for one of the two candidates A and B. The probability of a voter voting for A or B depends on whether the difference between the current votes of A and B is positive or negative. Our model takes into account the behavior of three different kinds of users when a branch occurs in the system: users who prefer one branch or the other based on the history of their transactions, and users who are indifferent and simply follow the longest chain. We study two important probabilities that are closely related to a blockchain-based system: the probability that A eventually wins, and the probability that A receives d votes first. We show how to recursively calculate the two probabilities for any fixed n and d, and we also discuss their asymptotic values when n and d are sufficiently large.
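
The recursive calculation mentioned above can be sketched as a simple dynamic program over the current lead of candidate A. The tie-breaking probability p_zero below is an assumption of this illustration, and the exact recursions and their asymptotics are the subject of the paper.

from functools import lru_cache

def prob_A_wins(n, p_plus, p_minus, p_zero=0.5):
    """Probability that A leads after n votes in a (+-)-biased ballot walk:
    each voter votes for A with probability p_plus if A currently leads,
    p_minus if B leads, and p_zero on a tie (the tie rule is an assumption
    of this sketch)."""
    @lru_cache(maxsize=None)
    def walk(voters_left, lead):
        if voters_left == 0:
            return 1.0 if lead > 0 else 0.0
        p = p_plus if lead > 0 else p_minus if lead < 0 else p_zero
        return p * walk(voters_left - 1, lead + 1) + \
               (1 - p) * walk(voters_left - 1, lead - 1)
    return walk(n, 0)

print(prob_A_wins(101, p_plus=0.6, p_minus=0.4))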

Cite as

Lin Chen, Lei Xu, Zhimin Gao, Nolan Shah, Yang Lu, and Weidong Shi. Smart Contract Execution - the (+-)-Biased Ballot Problem. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 21:1-21:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ISAAC.2017.21,
  author =	{Chen, Lin and Xu, Lei and Gao, Zhimin and Shah, Nolan and Lu, Yang and Shi, Weidong},
  title =	{{Smart Contract Execution - the (+-)-Biased Ballot Problem}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{21:1--21:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.21},
  URN =		{urn:nbn:de:0030-drops-82388},
  doi =		{10.4230/LIPIcs.ISAAC.2017.21},
  annote =	{Keywords: Blockchain, Probability, Random Walk, Smart Contract}
}
Document
The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints

Authors: Hung-I Yu, Tien-Ching Lin, and Der-Tsai Lee

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
In 1982, Drezner proposed the (1|1)-centroid problem on the plane, in which two players, called the leader and the follower, open facilities to provide service to customers in a competitive manner. The leader opens the first facility, and then the follower opens the second. Each customer patronizes the facility closest to them (ties broken in favor of the leader's), thereby deciding the market share of the two players. The goal is to find the best position for the leader's facility so that his market share is maximized. The best algorithm for this problem is an O(n^2 log n)-time parametric search approach, which searches over the space of possible market share values. In the same paper, Drezner also proposed a general version of the (1|1)-centroid problem by introducing a minimal distance constraint R, such that the follower's facility is not allowed to be located within a distance R from the leader's. He proposed an O(n^5 log n)-time algorithm for this general version by identifying O(n^4) points as candidates for the optimal solution and checking the market share for each of them. In this paper, we develop a new parametric search approach that searches over the O(n^4) candidate points, and present an O(n^2 log n)-time algorithm for the general version, thereby closing the O(n^3) gap between the two bounds.
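
The market-share objective being optimized can be evaluated in a few lines for a fixed pair of facility locations, as below; this is only the objective, not the parametric-search algorithm of the paper, and the minimal distance constraint R is not enforced here.

import numpy as np

def leader_market_share(customers, leader, follower):
    """Number of customers patronizing the leader's facility: each customer
    visits the closer of the two facilities, with ties going to the leader."""
    customers = np.asarray(customers, dtype=float)
    d_leader = np.linalg.norm(customers - np.asarray(leader, dtype=float), axis=1)
    d_follower = np.linalg.norm(customers - np.asarray(follower, dtype=float), axis=1)
    return int(np.sum(d_leader <= d_follower))

customers = np.array([[0, 0], [1, 0], [2, 0], [3, 0]])
print(leader_market_share(customers, leader=[1.0, 0.0], follower=[2.5, 0.0]))   # 2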

Cite as

Hung-I Yu, Tien-Ching Lin, and Der-Tsai Lee. The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 64:1-64:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{yu_et_al:LIPIcs.ISAAC.2016.64,
  author =	{Yu, Hung-I and Lin, Tien-Ching and Lee, Der-Tsai},
  title =	{{The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{64:1--64:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.64},
  URN =		{urn:nbn:de:0030-drops-68337},
  doi =		{10.4230/LIPIcs.ISAAC.2016.64},
  annote =	{Keywords: competitive facility, Euclidean plane, parametric search}
}
Document
Space-Efficient Error Reduction for Unitary Quantum Computations

Authors: Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
This paper presents a general space-efficient method for error reduction for unitary quantum computation. Consider a polynomial-time quantum computation with completeness c and soundness s, either with or without a witness (corresponding to QMA and BQP, respectively). To convert this computation into a new computation with error at most 2^{-p}, the most space-efficient method known requires extra workspace of O(p·log(1/(c-s))) qubits. This space requirement is too large for scenarios like logarithmic-space quantum computations. This paper shows an error-reduction method for unitary quantum computations (i.e., computations without intermediate measurements) that requires extra workspace of just O(log(p/(c-s))) qubits. This in particular gives the first method of strong amplification for logarithmic-space unitary quantum computations with two-sided bounded error. This also leads to a number of consequences in complexity theory, such as the uselessness of quantum witnesses in bounded-error logarithmic-space unitary quantum computations, the PSPACE upper bound for QMA with exponentially small completeness-soundness gap, and strong amplification for matchgate computations.
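
Classically, the amplification being made space-efficient is just majority voting over independent repetitions, as in the sketch below: for a constant completeness-soundness gap the error probability decays exponentially in the number of repetitions. The unitary, O(log(p/(c-s)))-extra-qubit version is the paper's contribution and is not reflected in this snippet.

import numpy as np

rng = np.random.default_rng(3)

def majority_acceptance(single_run_acceptance, repetitions, trials=100_000):
    """Monte Carlo estimate of the acceptance probability after repeating a
    bounded-error test and accepting iff more than half of the runs accept."""
    runs = rng.random((trials, repetitions)) < single_run_acceptance
    return np.mean(runs.sum(axis=1) > repetitions / 2)

for reps in (1, 11, 41):
    print(reps, "repetitions:", majority_acceptance(2 / 3, reps))   # yes-instance with c = 2/3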

Cite as

Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura. Space-Efficient Error Reduction for Unitary Quantum Computations. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.ICALP.2016.14,
  author =	{Fefferman, Bill and Kobayashi, Hirotada and Yen-Yu Lin, Cedric and Morimae, Tomoyuki and Nishimura, Harumichi},
  title =	{{Space-Efficient Error Reduction for Unitary Quantum Computations}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.14},
  URN =		{urn:nbn:de:0030-drops-62975},
  doi =		{10.4230/LIPIcs.ICALP.2016.14},
  annote =	{Keywords: space-bounded computation, quantum Merlin-Arthur proof systems, error reduction, quantum computing}
}
  • Refine by Author
  • 4 Lin, Cedric Yen-Yu
  • 3 Braverman, Vladimir
  • 3 Lin, Han-Hsuan
  • 3 Yang, Lin F.
  • 2 Chen, Lin

  • Refine by Classification
  • 2 Applied computing → Bioinformatics
  • 2 Applied computing → Computational genomics
  • 2 Theory of computation → Quantum complexity theory
  • 2 Theory of computation → Sketching and sampling
  • 1 Computing methodologies → Feature selection

  • Refine by Keyword
  • 2 Metagenomics binning
  • 2 Quantum Algorithms
  • 2 Query Complexity
  • 2 quantum computing
  • 1 Adversary Method

  • Refine by Type
  • 18 document

  • Refine by Publication Year
  • 3 2018
  • 2 2015
  • 2 2016
  • 2 2019
  • 2 2020
