12 Search Results for "Zhang, Xue"


Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on two datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
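To make retrieval-augmented example selection concrete, below is a minimal sketch of one such strategy: rank a library of solved (description, model) pairs by similarity to the new problem description and place the top matches in the prompt. The bag-of-words similarity, the tiny example library, and the placeholder models are stand-ins of our own, not the pipeline used in the paper.

import math
from collections import Counter

def embed(text):
    # Stand-in for a learned sentence encoder: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_examples(problem, library, k=2):
    # Retrieval-augmented in-context learning: pick the k most similar solved examples.
    q = embed(problem)
    return sorted(library, key=lambda ex: cosine(q, embed(ex["description"])), reverse=True)[:k]

library = [
    {"description": "maximise profit from producing two products under resource limits",
     "model": "# constraint model for a production problem ..."},
    {"description": "assign talks to rooms so that no two talks overlap",
     "model": "# constraint model with an AllDifferent constraint ..."},
]
chosen = select_examples("maximise revenue from two crops with limited land and water", library)
prompt = "\n\n".join(ex["description"] + "\n" + ex["model"] for ex in chosen)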

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
Combining Constraint Programming Reasoning with Large Language Model Predictions

Authors: Florian Régin, Elisabetta De Maria, and Alexandre Bonlarron

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP’s struggle with implementing "meaning" and ML’s difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) in which variable domains are generated by an LLM. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
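The division of labour can be illustrated with a toy version of the idea: a depth-first search extends a sentence word by word, a stub standing in for the LLM proposes the domain of each next variable on the fly, and the constraint side prunes candidates that violate structural requirements. The stub vocabulary and the word-length constraint below are illustrative assumptions, not the GenCP implementation.

def llm_domain(prefix):
    # Stub standing in for an LLM's top-k next-word predictions given the prefix.
    continuations = {
        (): ["the", "a"],
        ("the",): ["cat", "dog", "elephant"],
        ("a",): ["dog", "bird"],
    }
    return continuations.get(tuple(prefix), ["sat", "ran", "slept"])

def structurally_valid(words, max_len):
    # Structural constraint enforced by the CP side: every word has at most max_len letters.
    return all(len(w) <= max_len for w in words)

def generate(prefix, n_words, max_len):
    # On-the-fly search: a variable's domain is generated only when the search reaches it.
    if len(prefix) == n_words:
        return prefix
    for word in llm_domain(prefix):
        candidate = prefix + [word]
        if structurally_valid(candidate, max_len):
            result = generate(candidate, n_words, max_len)
            if result is not None:
                return result
    return None  # backtrack

print(generate([], 3, 4))  # e.g. ['the', 'cat', 'sat']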

Cite as

Florian Régin, Elisabetta De Maria, and Alexandre Bonlarron. Combining Constraint Programming Reasoning with Large Language Model Predictions. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 25:1-25:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{regin_et_al:LIPIcs.CP.2024.25,
  author =	{R\'{e}gin, Florian and De Maria, Elisabetta and Bonlarron, Alexandre},
  title =	{{Combining Constraint Programming Reasoning with Large Language Model Predictions}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{25:1--25:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.25},
  URN =		{urn:nbn:de:0030-drops-207109},
  doi =		{10.4230/LIPIcs.CP.2024.25},
  annote =	{Keywords: Solver and Tools, ML-augmented CP, Constrained Text Generation, ML alongside CO}
}
Document
Multi-qubit Lattice Surgery Scheduling

Authors: Allyson Silva, Xiangyi Zhang, Zak Webb, Mia Kramer, Chan-Woo Yang, Xiao Liu, Jessica Lemieux, Ka-Wai Chen, Artur Scherer, and Pooya Ronagh

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
Fault-tolerant quantum computation using two-dimensional topological quantum error correcting codes can benefit from multi-qubit long-range operations. By using simple commutation rules, a quantum circuit can be transpiled into a sequence of solely non-Clifford multi-qubit gates. Prior work on fault-tolerant compilation avoids optimal scheduling of such gates since they reduce the parallelizability of the circuit. We observe that the reduced parallelization potential is outweighed by the significant reduction in the number of gates. We therefore devise a method for scheduling multi-qubit lattice surgery using an earliest-available-first policy, solving the associated forest packing problem using a representation of the multi-qubit gates as Steiner trees. Our extensive testing on random and various Hamiltonian simulation circuits demonstrates the method’s scalability and performance. We show that the transpilation significantly reduces the circuit length on the set of circuits tested, and that the resulting circuit of multi-qubit gates has a further reduction in the expected circuit execution time compared to serial execution.
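The earliest-available-first policy itself is easy to state, as the sketch below shows for unit-time multi-qubit gates: each gate starts at the first step at which all of its operand qubits are free. A real scheduler must additionally pack the Steiner trees routing each gate through the ancilla space, which this toy version deliberately ignores.

from collections import defaultdict

def earliest_available_first(gates):
    # gates: qubit sets in the order produced by transpilation.
    # Returns a start time per gate, assuming unit gate duration and
    # no routing conflicts in the ancilla space.
    free_at = defaultdict(int)             # step at which each qubit becomes free
    starts = []
    for gate in gates:
        t = max(free_at[q] for q in gate)  # earliest step when all operands are free
        starts.append(t)
        for q in gate:
            free_at[q] = t + 1
    return starts

circuit = [{0, 1, 2}, {3, 4}, {1, 3}, {0, 4}]
print(earliest_available_first(circuit))   # [0, 0, 1, 1]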

Cite as

Allyson Silva, Xiangyi Zhang, Zak Webb, Mia Kramer, Chan-Woo Yang, Xiao Liu, Jessica Lemieux, Ka-Wai Chen, Artur Scherer, and Pooya Ronagh. Multi-qubit Lattice Surgery Scheduling. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 1:1-1:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{silva_et_al:LIPIcs.TQC.2024.1,
  author =	{Silva, Allyson and Zhang, Xiangyi and Webb, Zak and Kramer, Mia and Yang, Chan-Woo and Liu, Xiao and Lemieux, Jessica and Chen, Ka-Wai and Scherer, Artur and Ronagh, Pooya},
  title =	{{Multi-qubit Lattice Surgery Scheduling}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{1:1--1:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.1},
  URN =		{urn:nbn:de:0030-drops-206712},
  doi =		{10.4230/LIPIcs.TQC.2024.1},
  annote =	{Keywords: Scheduling, Large-Scale Optimization, Surface Code, Quantum Compilation, Circuit Optimization}
}
Document
Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits

Authors: Jiachen Hu, Tongyang Li, Xinzhao Wang, Yecheng Xue, Chenyi Zhang, and Han Zhong

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
We systematically investigate quantum algorithms and lower bounds for mean estimation given query access to non-identically distributed samples. On the one hand, we give quantum mean estimators with quadratic quantum speed-up given samples from different bounded or sub-Gaussian random variables. On the other hand, we prove that, in general, it is impossible for any quantum algorithm to achieve quadratic speed-up over the number of classical samples needed to estimate the mean μ, where the samples come from different random variables with mean close to μ. Technically, our quantum algorithms reduce bounded and sub-Gaussian random variables to the Bernoulli case, and use an uncomputation trick to overcome the challenge that direct amplitude estimation does not work with non-identical query access. Our quantum query lower bounds are established by simulating non-identical oracles by parallel oracles, and also by an adversarial method with non-identical oracles. Both results pave the way for proving quantum query lower bounds with non-identical oracles in general, which may be of independent interest.
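For context, the quadratic speed-up in question is the familiar gap from the identically-distributed setting, summarised below; this is standard background rather than a statement of this paper's bounds.

\[
\underbrace{\Theta\!\left(\sigma^{2}/\varepsilon^{2}\right)}_{\text{classical samples}}
\;\longrightarrow\;
\underbrace{\widetilde{O}\!\left(\sigma/\varepsilon\right)}_{\text{quantum queries}}
\qquad
\text{to estimate } \mu \text{ to accuracy } \varepsilon
\text{ for a random variable with variance } \sigma^{2}.
\]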

Cite as

Jiachen Hu, Tongyang Li, Xinzhao Wang, Yecheng Xue, Chenyi Zhang, and Han Zhong. Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hu_et_al:LIPIcs.TQC.2024.9,
  author =	{Hu, Jiachen and Li, Tongyang and Wang, Xinzhao and Xue, Yecheng and Zhang, Chenyi and Zhong, Han},
  title =	{{Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.9},
  URN =		{urn:nbn:de:0030-drops-206791},
  doi =		{10.4230/LIPIcs.TQC.2024.9},
  annote =	{Keywords: Quantum algorithms, Mean estimation, Non-identical samples, Query complexity}
}
Document
On Fourier Analysis of Sparse Boolean Functions over Certain Abelian Groups

Authors: Sourav Chakraborty, Swarnalipa Datta, Pranjal Dutta, Arijit Ghosh, and Swagato Sanyal

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Given an Abelian group 𝒢, a Boolean-valued function f: 𝒢 → {-1,+1} is said to be s-sparse if it has at most s non-zero Fourier coefficients over the domain 𝒢. In a seminal paper, Gopalan et al. [Gopalan et al., 2011] proved a "granularity" result for the Fourier coefficients of Boolean-valued functions over ℤ₂ⁿ, which has found many diverse applications in theoretical computer science and combinatorics. They also obtained structural results for Boolean functions over ℤ₂ⁿ which are approximately Fourier-sparse. In this work, we obtain structural results for approximately Fourier-sparse Boolean-valued functions over Abelian groups 𝒢 of the form 𝒢 := ℤ_{p_1}^{n_1} × ⋯ × ℤ_{p_t}^{n_t}, for distinct primes p_i. We also obtain a lower bound of the form 1/(m²s)^⌈φ(m)/2⌉ on the absolute value of the smallest non-zero Fourier coefficient of an s-sparse function, where m = p_1 ⋯ p_t and φ(m) = (p_1-1) ⋯ (p_t-1). We carefully apply probabilistic techniques from [Gopalan et al., 2011] to obtain our structural results, and use some non-trivial results from algebraic number theory to get the lower bound. We construct a family of at most s-sparse Boolean functions over ℤ_pⁿ, where p > 2, for arbitrarily large s, whose minimum non-zero Fourier coefficient is o(1/s). The granularity result of Gopalan et al. implies that the absolute values of non-zero Fourier coefficients of any s-sparse Boolean-valued function over ℤ₂ⁿ are Ω(1/s). So, our result shows that one cannot expect such a lower bound for general Abelian groups. Using our new structural results on the Fourier coefficients of sparse functions, we design an efficient sparsity testing algorithm for Boolean functions, which tests whether a given function is s-sparse or ε-far from every s-sparse Boolean function, using poly((ms)^φ(m), 1/ε) queries. Further, we generalize the notion of the degree of a Boolean function over an Abelian group 𝒢, and use it to prove an Ω(√s) lower bound on the query complexity of any adaptive sparsity testing algorithm.
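As a sanity check on the lower bound, instantiating it at the Boolean case m = 2 recovers the known granularity behaviour; this worked instance is ours, not the paper's:

\[
m = 2,\quad \varphi(2) = 1:\qquad
\frac{1}{(m^{2}s)^{\lceil \varphi(m)/2 \rceil}}
= \frac{1}{(4s)^{\lceil 1/2 \rceil}}
= \frac{1}{4s}
= \Omega\!\left(\frac{1}{s}\right),
\]

matching the Ω(1/s) granularity of Gopalan et al. for s-sparse Boolean-valued functions over ℤ₂ⁿ.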

Cite as

Sourav Chakraborty, Swarnalipa Datta, Pranjal Dutta, Arijit Ghosh, and Swagato Sanyal. On Fourier Analysis of Sparse Boolean Functions over Certain Abelian Groups. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.MFCS.2024.40,
  author =	{Chakraborty, Sourav and Datta, Swarnalipa and Dutta, Pranjal and Ghosh, Arijit and Sanyal, Swagato},
  title =	{{On Fourier Analysis of Sparse Boolean Functions over Certain Abelian Groups}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.40},
  URN =		{urn:nbn:de:0030-drops-205963},
  doi =		{10.4230/LIPIcs.MFCS.2024.40},
  annote =	{Keywords: Fourier coefficients, sparse, Abelian, granularity}
}
Document
On Line-Separable Weighted Unit-Disk Coverage and Related Problems

Authors: Gang Liu and Haitao Wang

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Given a set P of n points and a set S of n weighted disks in the plane, the disk coverage problem is to compute a subset of disks of smallest total weight such that the union of the disks in the subset covers all points of P. The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of P by a line 𝓁. We present an O(n^{3/2}log² n) time algorithm for the problem. This improves the previously best work of O(n²log n) time. Our result leads to an algorithm of O(n^{7/2}log² n) time for the halfplane coverage problem (i.e., using n weighted halfplanes to cover n points), an improvement over the previous O(n⁴log n) time solution. If all halfplanes are lower ones, our algorithm runs in O(n^{3/2}log² n) time, while the previous best algorithm takes O(n²log n) time. Using duality, the hitting set problems under the same settings can be solved with similar time complexities.

Cite as

Gang Liu and Haitao Wang. On Line-Separable Weighted Unit-Disk Coverage and Related Problems. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 70:1-70:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{liu_et_al:LIPIcs.MFCS.2024.70,
  author =	{Liu, Gang and Wang, Haitao},
  title =	{{On Line-Separable Weighted Unit-Disk Coverage and Related Problems}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{70:1--70:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.70},
  URN =		{urn:nbn:de:0030-drops-206265},
  doi =		{10.4230/LIPIcs.MFCS.2024.70},
  annote =	{Keywords: Line-separable, unit disks, halfplanes, geometric coverage, geometric hitting set}
}
Document
Faster Approximation Schemes for (Constrained) k-Means with Outliers

Authors: Zhen Zhang, Junyu Huang, and Qilong Feng

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Given a set of n points in ℝ^d and two positive integers k and m, the Euclidean k-means with outliers problem aims to remove at most m points, referred to as outliers, and minimize the k-means cost function for the remaining points. Developing algorithms for this problem remains an active area of research due to its prevalence in applications involving noisy data. In this paper, we give a (1+ε)-approximation algorithm that runs in n²d((k+m)ε^{-1})^{O(kε^{-1})} time for the problem. When combined with a coreset construction method, the running time of the algorithm can be improved to be linear in n. For the case where k is a constant, this represents the first polynomial-time approximation scheme for the problem: Existing algorithms with the same approximation guarantee run in polynomial time only when both k and m are constants. Furthermore, our approach generalizes to variants of k-means with outliers incorporating additional constraints on instances, such as those related to capacities and fairness.
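The objective being approximated is easy to pin down in code: given candidate centers, every point is charged its squared distance to the nearest center, and the m largest charges are waived as outliers. The sketch below fixes only this cost function; it is not the approximation scheme itself.

def cost_with_outliers(points, centers, m):
    # k-means-with-outliers objective: drop the m most expensive points.
    def nearest_sq_dist(p):
        return min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
    charges = sorted(nearest_sq_dist(p) for p in points)
    return sum(charges[:len(charges) - m])

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0)]   # the last point is noise
centers = [(0.3, 0.3)]
print(cost_with_outliers(points, centers, m=1))  # the far point is discarded as the outlier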

Cite as

Zhen Zhang, Junyu Huang, and Qilong Feng. Faster Approximation Schemes for (Constrained) k-Means with Outliers. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 84:1-84:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{zhang_et_al:LIPIcs.MFCS.2024.84,
  author =	{Zhang, Zhen and Huang, Junyu and Feng, Qilong},
  title =	{{Faster Approximation Schemes for (Constrained) k-Means with Outliers}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{84:1--84:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.84},
  URN =		{urn:nbn:de:0030-drops-206408},
  doi =		{10.4230/LIPIcs.MFCS.2024.84},
  annote =	{Keywords: Approximation algorithms, clustering}
}
Document
Enhancing MaxSAT Local Search via a Unified Soft Clause Weighting Scheme

Authors: Yi Chu, Chu-Min Li, Furong Ye, and Shaowei Cai

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Local search has been widely applied to solve the well-known (weighted) partial MaxSAT problem, significantly influencing many real-world applications. The main difficulty to overcome when designing a local search algorithm is that it can easily fall into local optima. Clause weighting is a beneficial technique that dynamically adjusts the landscape of the search space to help the algorithm escape from local optima. Existing works tend to increase the weights of falsified clauses, and such strategies may result in an unpredictable landscape of the search space during the optimization process. Therefore, in this paper, we propose a Unified Soft Clause Weighting Scheme called Unified-SW, which increases the weights of all soft clauses in feasible local optima, whether they are satisfied or not, while preserving the hierarchy among them. We implemented Unified-SW in a new local search solver called USW-LS. Experimental results demonstrate that USW-LS outperforms the state-of-the-art local search solvers across benchmarks from the anytime tracks of recent MaxSAT Evaluations. More promisingly, a hybrid solver combining USW-LS and TT-Open-WBO-Inc won all four categories in the anytime track of MaxSAT Evaluation 2023.
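As a rough illustration of the stated idea, the step below bumps the dynamic weight of every soft clause whenever the search reaches a feasible local optimum; a uniform additive increase is one simple way to keep the relative ordering, i.e. the hierarchy, of the weights intact. The actual Unified-SW update rule is more refined than this sketch.

def unified_soft_weight_bump(soft_clauses, delta=1):
    # At a feasible local optimum, increase the dynamic weight of ALL soft
    # clauses, satisfied or not; the identical additive bump preserves the
    # relative ordering of the clause weights.
    for clause in soft_clauses:
        clause["dyn_weight"] += delta

clauses = [{"dyn_weight": 3}, {"dyn_weight": 1}]
unified_soft_weight_bump(clauses)
print([c["dyn_weight"] for c in clauses])  # [4, 2] - ordering preserved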

Cite as

Yi Chu, Chu-Min Li, Furong Ye, and Shaowei Cai. Enhancing MaxSAT Local Search via a Unified Soft Clause Weighting Scheme. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chu_et_al:LIPIcs.SAT.2024.8,
  author =	{Chu, Yi and Li, Chu-Min and Ye, Furong and Cai, Shaowei},
  title =	{{Enhancing MaxSAT Local Search via a Unified Soft Clause Weighting Scheme}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.8},
  URN =		{urn:nbn:de:0030-drops-205301},
  doi =		{10.4230/LIPIcs.SAT.2024.8},
  annote =	{Keywords: Weighted Partial MaxSAT, Local Search Method, Weighting Scheme}
}
Document
Faster Treewidth-Based Approximations for Wiener Index

Authors: Giovanna Kobus Conrado, Amir Kafshdar Goharshady, Pavel Hudec, Pingjiang Li, and Harshit Jitendra Motwani

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
The Wiener index of a graph G is the sum of distances between all pairs of its vertices. It is a widely used graph property in chemistry, initially introduced to examine the link between boiling points and structural properties of alkanes, and it later found notable applications in drug design. Thus, computing or approximating the Wiener index of molecular graphs, i.e. graphs in which every vertex models an atom of a molecule and every edge models a bond, is of significant interest to the computational chemistry community. In this work, we build upon the observation that molecular graphs are sparse and tree-like and focus on developing efficient algorithms parameterized by treewidth to approximate the Wiener index. We present a new randomized approximation algorithm using a combination of tree decompositions and centroid decompositions. Our algorithm approximates the Wiener index within any desired multiplicative factor (1 ± ε) in time O(n ⋅ log n ⋅ k³ + √n ⋅ k/ε²), where n is the number of vertices of the graph and k is the treewidth. This time bound is almost linear in n. Finally, we provide experimental results over standard benchmark molecules from PubChem and the Protein Data Bank, showcasing the applicability and scalability of our approach on real-world chemical graphs and comparing it with previous methods.
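For reference, the quantity being approximated can be computed exactly with one breadth-first search per vertex, as sketched below on butane's carbon skeleton (the alkane setting in which the index was introduced); the paper's contribution is to get within (1 ± ε) far faster when the treewidth k is small.

from collections import deque

def wiener_index(adj):
    # Sum of shortest-path distances over all unordered vertex pairs,
    # via one BFS per vertex (unweighted graph).
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted twice

butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path on 4 carbons
print(wiener_index(butane))  # 10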

Cite as

Giovanna Kobus Conrado, Amir Kafshdar Goharshady, Pavel Hudec, Pingjiang Li, and Harshit Jitendra Motwani. Faster Treewidth-Based Approximations for Wiener Index. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{conrado_et_al:LIPIcs.SEA.2024.6,
  author =	{Conrado, Giovanna Kobus and Goharshady, Amir Kafshdar and Hudec, Pavel and Li, Pingjiang and Motwani, Harshit Jitendra},
  title =	{{Faster Treewidth-Based Approximations for Wiener Index}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.6},
  URN =		{urn:nbn:de:0030-drops-203718},
  doi =		{10.4230/LIPIcs.SEA.2024.6},
  annote =	{Keywords: Computational Chemistry, Treewidth, Wiener Index}
}
Document
Program Tailoring: Slicing by Sequential Criteria

Authors: Yue Li, Tian Tan, Yifei Zhang, and Jingling Xue

Published in: LIPIcs, Volume 56, 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Protocol and typestate analyses often report some sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses only on the behavior at P by computing a slice of the program affecting the values at P. In this paper, we propose to restrict our attention to the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in an SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements in all possible execution paths passing through at least one sequence in the SC in the given order. With a prototype implementation, Tailor, we show why tailoring is practically useful by conducting two case studies on seven large real-world Java applications. For program debugging and understanding, Tailor can complement program slicing by removing SC-irrelevant statements. For program analysis, Tailor can enable a pointer analysis that does not scale to a whole program to perform a more focused, and therefore potentially scalable, analysis of the program's specific parts containing hard language features such as reflection.
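The path condition defining a tailored program can be made concrete on a toy control-flow graph: for SC = [a, b], a statement survives exactly when it lies on some entry-to-exit path visiting a and then b. The reachability-based check below is our illustration of that definition on graph paths, not the Tailor implementation.

def reachable_from(graph, src):
    # Forward reachability in a directed CFG (src included).
    seen, stack = {src}, [src]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def tailor(graph, entry, exit_, a, b):
    # Keep node v if some entry -> ... -> a -> ... -> b -> ... -> exit path
    # can contain it, i.e. v lies on the entry-a, a-b, or b-exit segment.
    r = {n: reachable_from(graph, n) for n in graph}
    if not (a in r[entry] and b in r[a] and exit_ in r[b]):
        return set()  # no path satisfies the sequential criterion
    return {v for v in graph
            if (v in r[entry] and a in r[v])       # entry segment
            or (v in r[a] and b in r[v])           # middle segment
            or (v in r[b] and exit_ in r[v])}      # exit segment

cfg = {"entry": ["a", "skip"], "skip": ["exit"], "a": ["x", "b"],
       "x": ["b"], "b": ["exit"], "exit": []}
print(tailor(cfg, "entry", "exit", "a", "b"))
# keeps entry, a, x, b, exit; 'skip' bypasses the sequence and is dropped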

Cite as

Yue Li, Tian Tan, Yifei Zhang, and Jingling Xue. Program Tailoring: Slicing by Sequential Criteria. In 30th European Conference on Object-Oriented Programming (ECOOP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 56, pp. 15:1-15:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{li_et_al:LIPIcs.ECOOP.2016.15,
  author =	{Li, Yue and Tan, Tian and Zhang, Yifei and Xue, Jingling},
  title =	{{Program Tailoring: Slicing by Sequential Criteria}},
  booktitle =	{30th European Conference on Object-Oriented Programming (ECOOP 2016)},
  pages =	{15:1--15:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-014-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{56},
  editor =	{Krishnamurthi, Shriram and Lerner, Benjamin S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2016.15},
  URN =		{urn:nbn:de:0030-drops-61092},
  doi =		{10.4230/LIPIcs.ECOOP.2016.15},
  annote =	{Keywords: Program Slicing, Program Analysis, API Protocol Analysis}
}
Document
Program Tailoring: Slicing by Sequential Criteria (Artifact)

Authors: Tian Tan, Yue Li, Yifei Zhang, and Jingling Xue

Published in: DARTS, Volume 2, Issue 1, Special Issue of the 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Protocol and typestate analyses often report some sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses only on the behavior at P by computing a slice of the program affecting the values at P. In our companion paper "Program Tailoring: Slicing by Sequential Criteria", we propose to focus on the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in an SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements in all possible execution paths passing through at least one sequence in the SC in the given order. This artifact is based on TAILOR, a prototype implementation of program tailoring, and is designed to evaluate the usefulness of TAILOR in practice. The provided package is designed to support the repeatability of all the experiments of our companion paper. Specifically, it allows users to reproduce the results for all three research questions addressed in the evaluation section of our companion paper. In addition, an extensive set of extra results, which are not described in the companion paper, is also included to help users better understand this work.

Cite as

Tian Tan, Yue Li, Yifei Zhang, and Jingling Xue. Program Tailoring: Slicing by Sequential Criteria (Artifact). In Special Issue of the 30th European Conference on Object-Oriented Programming (ECOOP 2016). Dagstuhl Artifacts Series (DARTS), Volume 2, Issue 1, pp. 8:1-8:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{tan_et_al:DARTS.2.1.8,
  author =	{Tan, Tian and Li, Yue and Zhang, Yifei and Xue, Jingling},
  title =	{{Program Tailoring: Slicing by Sequential Criteria (Artifact)}},
  pages =	{8:1--8:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2016},
  volume =	{2},
  number =	{1},
  editor =	{Tan, Tian and Li, Yue and Zhang, Yifei and Xue, Jingling},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.2.1.8},
  URN =		{urn:nbn:de:0030-drops-61298},
  doi =		{10.4230/DARTS.2.1.8},
  annote =	{Keywords: Program Slicing, Program Analysis, API Protocol Specification}
}
Document
Complexity Classification of Two-Qubit Commuting Hamiltonians

Authors: Adam Bouland, Laura Mancinska, and Xue Zhang

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.

Cite as

Adam Bouland, Laura Mancinska, and Xue Zhang. Complexity Classification of Two-Qubit Commuting Hamiltonians. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 28:1-28:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bouland_et_al:LIPIcs.CCC.2016.28,
  author =	{Bouland, Adam and Mancinska, Laura and Zhang, Xue},
  title =	{{Complexity Classification of Two-Qubit Commuting Hamiltonians}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{28:1--28:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.28},
  URN =		{urn:nbn:de:0030-drops-58469},
  doi =		{10.4230/LIPIcs.CCC.2016.28},
  annote =	{Keywords: Quantum Computing, Sampling Problems, Commuting Hamiltonians, IQP, Gate Classification Theorems}
}