Search Results

Documents authored by Zhang, An



Zhang, Eddy Z.

Document
Interarrival Times Characterization and Fitting for Markovian Traffic Analysis

Authors: Giuliano Casale, Eddy Z. Zhang, and Evgenia Smirni

Published in: Dagstuhl Seminar Proceedings, Volume 7461, Numerical Methods for Structured Markov Chains (2008)


Abstract
We propose a traffic fitting algorithm for Markovian Arrival Processes (MAPs) that can capture statistics of any order of interarrival times. By studying real traffic traces, we show that matching higher order properties, in addition to first and second order descriptors, results in increased queueing prediction accuracy with respect to other algorithms that only match the mean, coefficient of variation, and autocorrelations. The result promotes the idea of modeling traffic traces using the interarrival time process instead of the counting process that is more frequently employed in previous work, but for which higher order moments are difficult to manipulate. We proceed by first characterizing the general properties of MAPs using a spectral approach. Based on this characterization, we show how different MAP processes can be combined together using Kronecker products to define a larger MAP with predefined properties of interarrival times. We then devise an algorithm that is based on this Kronecker composition and can accurately fit traffic traces. The algorithm employs nonlinear optimization programs that can be customized to fit an arbitrary number of moments and to meet the desired cost-accuracy tradeoff. Numerical results of the fitting algorithm on real HTTP and TCP traffic data, such as the Bellcore Aug89 trace, indicate that the proposed fitting methods achieve increased prediction accuracy with respect to other state-of-the-art fitting methods.
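
The Kronecker composition step can be sketched in a few lines of Python with NumPy. The fragment below assumes the composition rule D0 = -D0a (x) D0b, D1 = D1a (x) D1b (one common form of Kronecker-product composition of MAPs; the paper states the exact rule and the conditions under which the result is a valid MAP), and checks that the composed matrices still form a MAP generator:

import numpy as np

def kron_compose(D0a, D1a, D0b, D1b):
    # Compose two MAPs {D0a, D1a} and {D0b, D1b} into a larger MAP.
    # Assumed rule: D0 = -D0a (x) D0b, D1 = D1a (x) D1b, so that the
    # embedded transition matrix factorizes as P = Pa (x) Pb.
    return -np.kron(D0a, D0b), np.kron(D1a, D1b)

# A MAP(2) and a two-phase process with diagonal D0 (the diagonal D0b
# keeps the composed D0 valid in this toy example).
D0a = np.array([[-3.0, 1.0], [0.0, -2.0]])
D1a = np.array([[2.0, 0.0], [1.0, 1.0]])
D0b = np.array([[-5.0, 0.0], [0.0, -0.5]])
D1b = np.array([[4.5, 0.5], [0.05, 0.45]])

D0, D1 = kron_compose(D0a, D1a, D0b, D1b)
# Rows of D0 + D1 must sum to zero for a valid MAP generator.
assert np.allclose((D0 + D1).sum(axis=1), 0.0)
print(D0.shape)  # (4, 4): a MAP(4) with predefined interarrival properties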

Cite as

Giuliano Casale, Eddy Z. Zhang, and Evgenia Smirni. Interarrival Times Characterization and Fitting for Markovian Traffic Analysis. In Numerical Methods for Structured Markov Chains. Dagstuhl Seminar Proceedings, Volume 7461, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{casale_et_al:DagSemProc.07461.8,
  author =	{Casale, Giuliano and Zhang, Eddy Z. and Smirni, Evgenia},
  title =	{{Interarrival Times Characterization and Fitting for Markovian Traffic Analysis}},
  booktitle =	{Numerical Methods for Structured Markov Chains},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{7461},
  editor =	{Dario Bini and Beatrice Meini and Vaidyanathan Ramaswami and Marie-Ange Remiche and Peter Taylor},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.07461.8},
  URN =		{urn:nbn:de:0030-drops-13908},
  doi =		{10.4230/DagSemProc.07461.8},
  annote =	{Keywords: MAP fitting, interarrival time process, higher-order moments}
}

Zhang, An

Document
A 21/16-Approximation for the Minimum 3-Path Partition Problem

Authors: Yong Chen, Randy Goebel, Bing Su, Weitian Tong, Yao Xu, and An Zhang

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
The minimum k-path partition (Min-k-PP for short) problem aims to partition an input graph into the smallest number of paths, each of which has order at most k. We focus on the special case k=3. The existing literature mainly concentrates on exact algorithms for special graphs, such as trees. Because the problem is NP-hard on general graphs, the approximability of the Min-3-PP problem has attracted researchers' attention. The first approximation algorithm dates back about 10 years and achieves an approximation ratio of 3/2, which was recently improved to 13/9 and further to 4/3. We investigate the 3/2-approximation algorithm for the Min-3-PP problem and discover several interesting structural properties. Instead of studying the unweighted Min-3-PP problem directly, we design a novel weight schema for l-paths, l in {1, 2, 3}, and investigate the weighted version. A greedy local search algorithm is proposed to generate a heavy path partition. We show that the resulting path partition has the fewest 1-paths, which is also the key ingredient in the algorithms with ratios 13/9 and 4/3. Switching back to the unweighted objective function, we prove an approximation ratio of 21/16 via amortized analysis.
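
To make the objective concrete, here is a minimal Python sketch of a naive greedy partition into paths of order at most 3. This is a baseline illustration only, not the paper's algorithm, which additionally uses a weight schema for l-paths and local search moves to minimize the number of 1-paths:

def greedy_3_path_partition(adj):
    # adj: dict mapping each vertex to a list of neighbors.
    # Greedily grow a path of at most 3 vertices from an arbitrary seed.
    unused = set(adj)
    paths = []
    while unused:
        path = [unused.pop()]
        for _ in range(2):  # a 3-path allows at most two extensions
            nxt = next((u for u in adj[path[-1]] if u in unused), None)
            if nxt is None:
                break
            unused.remove(nxt)
            path.append(nxt)
        paths.append(path)
    return paths

# On a 6-cycle the optimum is 2 paths; the naive greedy may emit 3,
# which is the kind of gap the weighted local search closes.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(greedy_3_path_partition(cycle))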

Cite as

Yong Chen, Randy Goebel, Bing Su, Weitian Tong, Yao Xu, and An Zhang. A 21/16-Approximation for the Minimum 3-Path Partition Problem. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 46:1-46:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ISAAC.2019.46,
  author =	{Chen, Yong and Goebel, Randy and Su, Bing and Tong, Weitian and Xu, Yao and Zhang, An},
  title =	{{A 21/16-Approximation for the Minimum 3-Path Partition Problem}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{46:1--46:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.46},
  URN =		{urn:nbn:de:0030-drops-115422},
  doi =		{10.4230/LIPIcs.ISAAC.2019.46},
  annote =	{Keywords: 3-path partition, exact set cover, approximation algorithm, local search, amortized analysis}
}

Zhang, Hongyu

Document
Platial k-Anonymity: Improving Location Anonymity Through Temporal Popularity Signatures

Authors: Grant McKenzie and Hongyu Zhang

Published in: LIPIcs, Volume 277, 12th International Conference on Geographic Information Science (GIScience 2023)


Abstract
While it is increasingly necessary in today’s digital society, sharing personal location information comes at a cost. Sharing one’s precise place of interest, e.g., Compass Coffee, enables a range of location-based services, but substantially reduces the individual’s privacy. Methods have been developed to obfuscate and anonymize location data while still maintaining a degree of utility. One such approach, spatial k-anonymity, aims to ensure an individual’s level of anonymity by reporting their location as a set of k potential locations rather than their actual location alone. Larger values of k increase spatial anonymity while decreasing the utility of the location information. Typical examples of spatial k-anonymized datasets present elements as simple geographic points with no attributes or contextual information. In this work, we demonstrate that the addition of publicly available contextual data can significantly reduce the anonymity of a k-anonymized dataset. Through the analysis of place type temporal visitation patterns, hours of operation, and popularity values, one’s anonymity can be decreased by more than 50 percent. We propose a platial k-anonymity approach that leverages a combination of temporal popularity signatures and reports the amount that k must increase in order to maintain a certain level of anonymity. Finally, a method for reporting platial k-anonymous regions is presented and the implications of our methods are discussed.
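
The core attack is easy to reproduce. Below is a minimal sketch, assuming a hypothetical data model in which each of the k candidate places carries opening hours and a normalized hourly popularity signature; it counts how many candidates remain plausible at the disclosure time:

def effective_k(candidates, hour):
    # A candidate place stays plausible only if it is open and has
    # non-zero popularity at the given hour (hypothetical data model).
    plausible = 0
    for place in candidates:
        start, end = place["open"]
        if start <= hour < end and place["popularity"][hour] > 0.0:
            plausible += 1
    return plausible

cafe = {"open": (7, 18), "popularity": [0.0] * 7 + [0.1] * 11 + [0.0] * 6}
bar = {"open": (18, 24), "popularity": [0.0] * 18 + [0.2] * 6}
gym = {"open": (6, 22), "popularity": [0.0] * 6 + [0.05] * 16 + [0.0] * 2}

k_anonymous_set = [cafe, bar, gym]  # spatially k-anonymous with k = 3
print(effective_k(k_anonymous_set, hour=20))  # 2: the cafe is ruled out
print(effective_k(k_anonymous_set, hour=23))  # 1: anonymity collapses

Reporting such a set as k-anonymous overstates the protection; this is the gap the proposed platial k-anonymity closes by increasing k until the effective count meets the target.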

Cite as

Grant McKenzie and Hongyu Zhang. Platial k-Anonymity: Improving Location Anonymity Through Temporal Popularity Signatures. In 12th International Conference on Geographic Information Science (GIScience 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 277, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{mckenzie_et_al:LIPIcs.GIScience.2023.9,
  author =	{McKenzie, Grant and Zhang, Hongyu},
  title =	{{Platial k-Anonymity: Improving Location Anonymity Through Temporal Popularity Signatures}},
  booktitle =	{12th International Conference on Geographic Information Science (GIScience 2023)},
  pages =	{9:1--9:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-288-4},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{277},
  editor =	{Beecham, Roger and Long, Jed A. and Smith, Dianna and Zhao, Qunshan and Wise, Sarah},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2023.9},
  URN =		{urn:nbn:de:0030-drops-189045},
  doi =		{10.4230/LIPIcs.GIScience.2023.9},
  annote =	{Keywords: location anonymity, location privacy, geoprivacy, place, temporal, geosocial}
}
Document
Do Bugs Propagate? An Empirical Analysis of Temporal Correlations Among Software Bugs

Authors: Xiaodong Gu, Yo-Sub Han, Sunghun Kim, and Hongyu Zhang

Published in: LIPIcs, Volume 194, 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
The occurrences of bugs are not isolated events; rather, they may interact, affect each other, and trigger other latent bugs. Identifying and understanding bug correlations could help developers localize bug origins, predict potential bugs, and design better software architectures that prevent bugs from spreading. Many studies in the defect prediction and fault localization literature have implied dependence and interactions between multiple bugs, but few explicitly investigate how bugs correlate across time steps and affect each other. In this paper, we perform social network analysis on the temporal correlations between bugs across time steps on software artifact ties, i.e., software graphs. Adopting the correlation analysis methodology of social networks, we construct software graphs for three artifact ties, such as function calls and type hierarchy, and then perform longitudinal logistic regressions of time-lagged bug correlations on these graphs. Our experiments on four open-source projects suggest that bugs can propagate, as observed on certain artifact tie graphs. Based on our findings, we propose a hybrid artifact tie graph, a synthesis of a few well-known software graphs, that exhibits a higher degree of bug propagation. Our findings shed light on research toward better bug prediction and localization models and help developers perform maintenance actions that prevent consequential bugs.
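
The regression setup can be sketched as follows, assuming a simplified data model (a software graph as an adjacency dict and, per time step, the set of buggy components). The paper's actual analysis is richer, but the shape of the time-lag features is the same idea:

import numpy as np
from sklearn.linear_model import LogisticRegression

def time_lag_dataset(graph, buggy_at):
    # graph: node -> list of neighbors on an artifact tie (e.g., call graph).
    # buggy_at: time step -> set of nodes that were buggy at that step.
    # Features per (node, t): [node itself buggy at t-1,
    #                          fraction of buggy neighbors at t-1].
    X, y = [], []
    steps = sorted(buggy_at)
    for prev, cur in zip(steps, steps[1:]):
        for node, nbrs in graph.items():
            frac = sum(n in buggy_at[prev] for n in nbrs) / max(len(nbrs), 1)
            X.append([float(node in buggy_at[prev]), frac])
            y.append(int(node in buggy_at[cur]))
    return np.array(X), np.array(y)

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
buggy_at = {0: {"a"}, 1: {"a", "b"}, 2: {"b", "c"}, 3: set()}
X, y = time_lag_dataset(graph, buggy_at)
model = LogisticRegression().fit(X, y)
# A positive coefficient on the neighbor-fraction feature is the kind of
# evidence that would be read as bug propagation along the artifact tie.
print(model.coef_)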

Cite as

Xiaodong Gu, Yo-Sub Han, Sunghun Kim, and Hongyu Zhang. Do Bugs Propagate? An Empirical Analysis of Temporal Correlations Among Software Bugs. In 35th European Conference on Object-Oriented Programming (ECOOP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 194, pp. 11:1-11:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{gu_et_al:LIPIcs.ECOOP.2021.11,
  author =	{Gu, Xiaodong and Han, Yo-Sub and Kim, Sunghun and Zhang, Hongyu},
  title =	{{Do Bugs Propagate? An Empirical Analysis of Temporal Correlations Among Software Bugs}},
  booktitle =	{35th European Conference on Object-Oriented Programming (ECOOP 2021)},
  pages =	{11:1--11:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-190-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{194},
  editor =	{M{\o}ller, Anders and Sridharan, Manu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2021.11},
  URN =		{urn:nbn:de:0030-drops-140540},
  doi =		{10.4230/LIPIcs.ECOOP.2021.11},
  annote =	{Keywords: empirical software engineering, bug propagation, software graph, bug correlation}
}
Document
Testing and Verification of Compilers (Dagstuhl Seminar 17502)

Authors: Junjie Chen, Alastair F. Donaldson, Andreas Zeller, and Hongyu Zhang

Published in: Dagstuhl Reports, Volume 7, Issue 12 (2018)


Abstract
This report documents Dagstuhl Seminar 17502 "Testing and Verification of Compilers", which took place from December 10 to 13, 2017. We provide it as a resource for researchers interested in understanding the state of the art and the open problems in this field, and in applying them to this and other areas.

Cite as

Junjie Chen, Alastair F. Donaldson, Andreas Zeller, and Hongyu Zhang. Testing and Verification of Compilers (Dagstuhl Seminar 17502). In Dagstuhl Reports, Volume 7, Issue 12, pp. 50-65, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Article{chen_et_al:DagRep.7.12.50,
  author =	{Chen, Junjie and Donaldson, Alastair F. and Zeller, Andreas and Zhang, Hongyu},
  title =	{{Testing and Verification of Compilers (Dagstuhl Seminar 17502)}},
  pages =	{50--65},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2018},
  volume =	{7},
  number =	{12},
  editor =	{Chen, Junjie and Donaldson, Alastair F. and Zeller, Andreas and Zhang, Hongyu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.7.12.50},
  URN =		{urn:nbn:de:0030-drops-86763},
  doi =		{10.4230/DagRep.7.12.50},
  annote =	{Keywords: code generation, compiler testing, compiler verification, program analysis, program optimization}
}

Zhang, Eugene

Document
Visualization and Processing of Anisotropy in Imaging, Geometry, and Astronomy (Dagstuhl Seminar 18442)

Authors: Andrea Fuster, Evren Özarslan, Thomas Schultz, and Eugene Zhang

Published in: Dagstuhl Reports, Volume 8, Issue 10 (2019)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 18442, "Visualization and Processing of Anisotropy in Imaging, Geometry, and Astronomy", which was attended by 30 international researchers, both junior and senior. Directional preferences or anisotropies are encountered across many different disciplines and spatial scales. These disciplines share a need for modeling, processing, and visualizing anisotropic quantities, which poses interesting challenges to applied computer science. With the goal of identifying open problems, making practitioners aware of existing solutions, and discovering synergies between different applications in which anisotropy arises, this seminar brought together researchers working on different aspects of computer science with experts from neuroimaging and astronomy. This report gathers abstracts of the talks held by the participants, as well as an account of topics raised within the breakout sessions.

Cite as

Andrea Fuster, Evren Özarslan, Thomas Schultz, and Eugene Zhang. Visualization and Processing of Anisotropy in Imaging, Geometry, and Astronomy (Dagstuhl Seminar 18442). In Dagstuhl Reports, Volume 8, Issue 10, pp. 148-172, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{fuster_et_al:DagRep.8.10.148,
  author =	{Fuster, Andrea and \"{O}zarslan, Evren and Schultz, Thomas and Zhang, Eugene},
  title =	{{Visualization and Processing of Anisotropy in Imaging, Geometry, and Astronomy (Dagstuhl Seminar 18442)}},
  pages =	{148--172},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{8},
  number =	{10},
  editor =	{Fuster, Andrea and \"{O}zarslan, Evren and Schultz, Thomas and Zhang, Eugene},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.8.10.148},
  URN =		{urn:nbn:de:0030-drops-103524},
  doi =		{10.4230/DagRep.8.10.148},
  annote =	{Keywords: Anisotropy, astronomy, diffusion-weighted imaging (DWI), geometry processing, tensor fields, topology, visualization, uncertainty, shape modeling, microstructure imaging, statistical analysis}
}

Zhang, Yi

Document
A Generic User Interface Architecture for Analyzing Use Hazards in Infusion Pump Software

Authors: Paolo Masci, Yi Zhang, Paul Jones, Harold Thimbleby, and Paul Curzon

Published in: OASIcs, Volume 36, 5th Workshop on Medical Cyber-Physical Systems (2014)


Abstract
This paper presents a generic infusion pump user interface (GIP-UI) architecture that aims to capture the common characteristics and functionalities of interactive software incorporated in broad classes of infusion pumps. It is designed to facilitate the identification of use hazards and their causes in infusion pump designs. This architecture constitutes our first effort at establishing a model-based risk analysis methodology that helps manufacturers identify and mitigate use hazards in their products at early stages of the development life-cycle. The applicability of the GIP-UI architecture has been confirmed in a hazard analysis focusing on the number entry software of existing infusion pumps, in which the architecture is used to identify a substantial set of user interface design errors that may contribute to use hazards found in infusion pump incidents.
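
As a flavor of the kind of user interface design error such an analysis surfaces in number entry software, consider the following minimal display model (hypothetical, not the paper's GIP-UI) that silently ignores a second decimal point; two different key sequences then become indistinguishable on the display:

def press(display, key):
    # Hypothetical number-entry behavior: a second '.' is silently dropped.
    if key == ".":
        return display if "." in display else display + "."
    if key.isdigit():
        return display + key
    return display

def type_keys(keys):
    display = ""
    for k in keys:
        display = press(display, k)
    return display

# Two distinct keying sequences yield the same display; the user gets
# no feedback that a keystroke was ignored, a classic use-hazard pattern.
print(type_keys("1..5"))  # "1.5"
print(type_keys("1.5"))   # "1.5"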

Cite as

Paolo Masci, Yi Zhang, Paul Jones, Harold Thimbleby, and Paul Curzon. A Generic User Interface Architecture for Analyzing Use Hazards in Infusion Pump Software. In 5th Workshop on Medical Cyber-Physical Systems. Open Access Series in Informatics (OASIcs), Volume 36, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{masci_et_al:OASIcs.MCPS.2014.1,
  author =	{Masci, Paolo and Zhang, Yi and Jones, Paul and Thimbleby, Harold and Curzon, Paul},
  title =	{{A Generic User Interface Architecture for Analyzing Use Hazards in Infusion Pump Software}},
  booktitle =	{5th Workshop on Medical Cyber-Physical Systems},
  pages =	{1--14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-66-8},
  ISSN =	{2190-6807},
  year =	{2014},
  volume =	{36},
  editor =	{Turau, Volker and Kwiatkowska, Marta and Mangharam, Rahul and Weyer, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.MCPS.2014.1},
  URN =		{urn:nbn:de:0030-drops-45185},
  doi =		{10.4230/OASIcs.MCPS.2014.1},
  annote =	{Keywords: Infusion Pump, Hazard analysis, Use hazards, User Interface, Interactive software, Design errors}
}
Document
Towards the Implementation and Evaluation of Semi-Partitioned Multi-Core Scheduling

Authors: Yi Zhang, Nan Guan, and Wang Yi

Published in: OASIcs, Volume 18, Bringing Theory to Practice: Predictability and Performance in Embedded Systems (2011)


Abstract
Recent theoretical studies have shown that partitioning-based scheduling has better real-time performance on multi-cores than other scheduling paradigms such as global scheduling. In particular, a class of partitioning-based scheduling algorithms (called semi-partitioned scheduling), which allows a small number of tasks to be split among different cores, offers very high resource utilization and appears to be a promising solution for scheduling real-time systems on multi-cores. The major concern about semi-partitioned scheduling is that, due to task splitting, some tasks migrate from one core to another at run time and might incur higher context-switch overhead than under partitioned scheduling. One might therefore suspect that the extra overhead caused by task splitting counteracts the theoretical performance gain of semi-partitioned scheduling. In this work, we implement a semi-partitioned scheduler in the Linux operating system and run experiments on an Intel Core i7 4-core machine to measure the real overhead of both partitioned scheduling and semi-partitioned scheduling. We then integrate the measured overhead into state-of-the-art partitioned and semi-partitioned scheduling algorithms and conduct an empirical comparison of their real-time performance. Our results show that the extra overhead caused by task splitting in semi-partitioned scheduling is very low, and its effect on system schedulability is very small. Semi-partitioned scheduling indeed outperforms partitioned scheduling in realistic systems.
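
The essence of semi-partitioned scheduling, splitting only the tasks that do not fit, can be sketched with a toy next-fit assignment of task utilizations. This is an illustration only; the paper evaluates concrete state-of-the-art algorithms with measured overheads:

def semi_partition(utils, num_cores, cap=1.0):
    # Assign tasks (given as utilizations) to cores in order; when a task
    # does not fit on the current core, split it and carry the remainder
    # to the next core. Split tasks are the ones that migrate at run time.
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    c = 0
    for task, u in enumerate(utils):
        while u > 1e-9:
            if c == num_cores:
                raise ValueError("task set does not fit under this heuristic")
            share = min(u, cap - load[c])
            if share > 1e-9:
                cores[c].append((task, round(share, 3)))
                load[c] += share
                u -= share
            if u > 1e-9:
                c += 1  # current core is full: split here, migrate the rest
    return cores

# Four tasks of utilization 0.6 on three cores: total utilization 2.4
# fits only if tasks 1 and 3 are each split across two cores.
print(semi_partition([0.6, 0.6, 0.6, 0.6], num_cores=3))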

Cite as

Yi Zhang, Nan Guan, and Wang Yi. Towards the Implementation and Evaluation of Semi-Partitioned Multi-Core Scheduling. In Bringing Theory to Practice: Predictability and Performance in Embedded Systems. Open Access Series in Informatics (OASIcs), Volume 18, pp. 42-46, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{zhang_et_al:OASIcs.PPES.2011.42,
  author =	{Zhang, Yi and Guan, Nan and Yi, Wang},
  title =	{{Towards the Implementation and Evaluation of Semi-Partitioned Multi-Core Scheduling}},
  booktitle =	{Bringing Theory to Practice: Predictability and Performance in Embedded Systems},
  pages =	{42--46},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-28-6},
  ISSN =	{2190-6807},
  year =	{2011},
  volume =	{18},
  editor =	{Lucas, Philipp and Wilhelm, Reinhard},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.PPES.2011.42},
  URN =		{urn:nbn:de:0030-drops-30804},
  doi =		{10.4230/OASIcs.PPES.2011.42},
  annote =	{Keywords: real-time operating system, multi-core, semi-partitioned scheduling}
}
Document
P2P XQuery and the StreetTiVo application

Authors: Peter A. Boncz and Yi Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 6431, Scalable Data Management in Evolving Networks (2007)


Abstract
MonetDB/XQuery* is a fully functional, publicly available XML DBMS that has been extended with distributed and P2P data management functionality. Our (minimal) XQuery language extension, XRPC, adds the concept of RPC to XQuery, and we outline our approach to including the services offered by diverse P2P network structures (such as DHTs) in a way that avoids any further intrusion into the XQuery language and semantics. We also discuss the StreetTiVo application, where MonetDB/XQuery is being used for data management in a large P2P environment.

Cite as

Peter A. Boncz and Yi Zhang. P2P XQuery and the StreetTiVo application. In Scalable Data Management in Evolving Networks. Dagstuhl Seminar Proceedings, Volume 6431, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


BibTeX

@InProceedings{boncz_et_al:DagSemProc.06431.6,
  author =	{Boncz, Peter A. and Zhang, Yi},
  title =	{{P2P XQuery and the StreetTiVo application}},
  booktitle =	{Scalable Data Management in Evolving Networks},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2007},
  volume =	{6431},
  editor =	{Stefan B\"{o}ttcher and Le Gruenwald and Pedro Jose Marr\'{o}n and Evaggelia Pitoura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.06431.6},
  URN =		{urn:nbn:de:0030-drops-9499},
  doi =		{10.4230/DagSemProc.06431.6},
  annote =	{Keywords: Distributed XQuery, P2P, DHT}
}

Zhang, Jian

Document
Improving Local Search for Structured SAT Formulas via Unit Propagation Based Construct and Cut Initialization (Short Paper)

Authors: Shaowei Cai, Chuan Luo, Xindi Zhang, and Jian Zhang

Published in: LIPIcs, Volume 210, 27th International Conference on Principles and Practice of Constraint Programming (CP 2021)


Abstract
This work is dedicated to improving local search solvers for the Boolean satisfiability (SAT) problem on structured instances. We propose a construct-and-cut (CnC) algorithm based on unit propagation, which is used to produce initial assignments for local search. We integrate our CnC initialization procedure into several state-of-the-art local search SAT solvers and obtain improved solvers. Experiments are carried out on a benchmark encoded from a spectrum repacking project, as well as on benchmarks encoded from two important mathematical problems, namely Boolean Pythagorean Triples and Schur Number Five. The experiments show that the CnC initialization improves the local search solvers, leading to better performance than state-of-the-art SAT solvers based on Conflict-Driven Clause Learning (CDCL).
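
A minimal sketch of the construct phase follows: unit propagation interleaved with random decisions. The actual CnC algorithm additionally applies a "cut" policy when propagation runs into conflicts, which is omitted here:

import random

def up_construct(clauses, num_vars, seed=0):
    # clauses: list of clauses, each a list of non-zero ints (DIMACS style:
    # v means variable v is true, -v means variable v is false).
    rng = random.Random(seed)
    assign = {}

    def unassigned_lits(clause):
        # None if the clause is already satisfied, else its unassigned literals.
        pending = []
        for lit in clause:
            val = assign.get(abs(lit))
            if val is None:
                pending.append(lit)
            elif (lit > 0) == val:
                return None
        return pending

    while len(assign) < num_vars:
        unit = None
        for clause in clauses:
            pending = unassigned_lits(clause)
            if pending is not None and len(pending) == 1:
                unit = pending[0]
                break
        if unit is not None:            # propagate a forced value
            assign[abs(unit)] = unit > 0
        else:                           # no unit clause: random decision
            free = [v for v in range(1, num_vars + 1) if v not in assign]
            var = rng.choice(free)
            assign[var] = rng.random() < 0.5
    return assign

print(up_construct([[1, 2], [-1, 3], [-2, -3]], num_vars=3))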

Cite as

Shaowei Cai, Chuan Luo, Xindi Zhang, and Jian Zhang. Improving Local Search for Structured SAT Formulas via Unit Propagation Based Construct and Cut Initialization (Short Paper). In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 210, pp. 5:1-5:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cai_et_al:LIPIcs.CP.2021.5,
  author =	{Cai, Shaowei and Luo, Chuan and Zhang, Xindi and Zhang, Jian},
  title =	{{Improving Local Search for Structured SAT Formulas via Unit Propagation Based Construct and Cut Initialization}},
  booktitle =	{27th International Conference on Principles and Practice of Constraint Programming (CP 2021)},
  pages =	{5:1--5:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-211-2},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{210},
  editor =	{Michel, Laurent D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2021.5},
  URN =		{urn:nbn:de:0030-drops-152969},
  doi =		{10.4230/LIPIcs.CP.2021.5},
  annote =	{Keywords: Satisfiability, Local Search, Unit Propagation, Mathematical Problems}
}
Document
Automatic Transformation of Raw Clinical Data Into Clean Data Using Decision Tree Learning Combining with String Similarity Algorithm

Authors: Jian Zhang

Published in: OASIcs, Volume 49, 2015 Imperial College Computing Student Workshop (ICCSW 2015)


Abstract
It is challenging to conduct statistical analyses of complex scientific datasets. Finding the relationships within the data is a time-consuming process for either a scientist or a statistician. The process involves preprocessing the raw data, selecting appropriate statistics, performing the analysis, and providing correct interpretations, among which data pre-processing is tedious and a particular time drain. In a large amount of data provided for analysis, there is no standard for recording the information, and errors of spelling, typing, or transmission occur. Thus, there are many expressions for the same meaning in the data, and it is impossible for an analysis system to deal with these inaccuracies automatically. What is needed is an automatic method for transforming raw clinical data into data that can be processed automatically. In this paper we propose a method combining decision tree learning with a string similarity algorithm, which is fast and accurate for clinical data cleaning. Experimental results show that it outperforms individual string similarity algorithms and the traditional data cleaning process.
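
The string-similarity half of the method is easy to illustrate. Here is a minimal sketch using Python's standard-library SequenceMatcher as the similarity measure (the vocabulary and threshold are illustrative; the paper combines such similarity scores with decision tree learning to choose the right transformation):

from difflib import SequenceMatcher

def canonicalize(raw, vocabulary, threshold=0.8):
    # Map a raw free-text entry to its most similar canonical term;
    # below the threshold, keep the raw value for manual review.
    best, best_score = None, 0.0
    for term in vocabulary:
        score = SequenceMatcher(None, raw.lower(), term.lower()).ratio()
        if score > best_score:
            best, best_score = term, score
    return best if best_score >= threshold else raw

vocabulary = ["hypertension", "diabetes mellitus", "asthma"]
print(canonicalize("Hypertenshion", vocabulary))    # -> hypertension
print(canonicalize("diabets melitus", vocabulary))  # -> diabetes mellitus
print(canonicalize("unknown entry", vocabulary))    # kept as-is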

Cite as

Jian Zhang. Automatic Transformation of Raw Clinical Data Into Clean Data Using Decision Tree Learning Combining with String Similarity Algorithm. In 2015 Imperial College Computing Student Workshop (ICCSW 2015). Open Access Series in Informatics (OASIcs), Volume 49, pp. 87-94, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{zhang:OASIcs.ICCSW.2015.87,
  author =	{Zhang, Jian},
  title =	{{Automatic Transformation of Raw Clinical Data Into Clean Data Using Decision Tree Learning Combining with String Similarity Algorithm}},
  booktitle =	{2015 Imperial College Computing Student Workshop (ICCSW 2015)},
  pages =	{87--94},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-000-2},
  ISSN =	{2190-6807},
  year =	{2015},
  volume =	{49},
  editor =	{Schulz, Claudia and Liew, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2015.87},
  URN =		{urn:nbn:de:0030-drops-54850},
  doi =		{10.4230/OASIcs.ICCSW.2015.87},
  annote =	{Keywords: Raw Clinical Data, Decision Tree Learning, String Similarity Algorithm}
}

Zhang, Hanjing

Document
Simulation Combined Approach to Police Patrol Services Staffing

Authors: Hanjing Zhang, Antuela Tako, Lisa M. Jackson, and Jiyin Liu

Published in: OASIcs, Volume 50, 5th Student Conference on Operational Research (SCOR 2016)


Abstract
Motivated by the squeeze on public service expenditure, staffing is an important issue for service systems, which are required to maintain or even improve their service levels in order to meet general public demand. This paper considers Police Patrol Service Systems (PPSSs), where staffing issues are extremely serious and important because they have an impact on service costs, quality, and public safety. Police patrol service systems are of particular interest because the demand for service exhibits strongly time-varying characteristics. Moreover, incidents of different urgency grades have different targets for immediate attendance by patrol officers. A new method is proposed to determine appropriate staffing levels. This method starts with a refinement of the Square Root Staffing (SRS) algorithm, which introduces the possibility of a delay in responding to a priority incident. Simulation of queueing systems is then used to indicate modifications to shift schedules. The proposed method is shown to be effective on a test instance generated from real patrol activity records of a local police force.
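
For reference, the textbook square-root staffing rule that the method refines: with arrival rate lambda, service rate mu, and offered load R = lambda/mu, staff c = ceil(R + beta * sqrt(R)) servers, where beta trades off cost against quality of service. A minimal sketch applied to time-varying hourly demand (the numbers are illustrative only):

import math

def srs_staffing(arrival_rate, service_rate, beta):
    # Square Root Staffing: offered load R = lambda / mu,
    # staffing level c = ceil(R + beta * sqrt(R)).
    R = arrival_rate / service_rate
    return math.ceil(R + beta * math.sqrt(R))

# Hourly incident rates (illustrative), officers clearing 2 incidents/hour.
hourly_rate = {8: 6.0, 12: 10.0, 22: 15.0}
for hour, lam in hourly_rate.items():
    print(f"{hour:02d}:00 -> {srs_staffing(lam, service_rate=2.0, beta=0.5)} officers")

The paper's refinement additionally allows a grade-dependent delay before attendance, and simulation then adjusts the resulting shift schedules.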

Cite as

Hanjing Zhang, Antuela Tako, Lisa M. Jackson, and Jiyin Liu. Simulation Combined Approach to Police Patrol Services Staffing. In 5th Student Conference on Operational Research (SCOR 2016). Open Access Series in Informatics (OASIcs), Volume 50, pp. 4:1-4:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{zhang_et_al:OASIcs.SCOR.2016.4,
  author =	{Zhang, Hanjing and Tako, Antuela and Jackson, Lisa M. and Liu, Jiyin},
  title =	{{Simulation Combined Approach to Police Patrol Services Staffing}},
  booktitle =	{5th Student Conference on Operational Research (SCOR 2016)},
  pages =	{4:1--4:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-004-0},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{50},
  editor =	{Hardy, Bradley and Qazi, Abroon and Ravizza, Stefan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SCOR.2016.4},
  URN =		{urn:nbn:de:0030-drops-65168},
  doi =		{10.4230/OASIcs.SCOR.2016.4},
  annote =	{Keywords: Police patrol service system, Time dependent queue, Priority queue, Square root staffing, Simulation}
}

Zhang, Qinglin

Document
Understanding Restaurant Stories Using an ASP Theory of Intentions

Authors: Daniela Inclezan, Qinglin Zhang, Marcello Balduccini, and Ankush Israney

Published in: OASIcs, Volume 58, Technical Communications of the 33rd International Conference on Logic Programming (ICLP 2017)


Abstract
The paper describes an application of logic programming to story understanding. Substantial work in this direction has been done by Erik Mueller, who focused on texts about stereotypical activities (or scripts), in particular restaurant stories. His system performed well, but could not understand texts describing exceptional scenarios. We propose addressing this problem by using a theory of intentions developed by Blount, Gelfond, and Balduccini. We present a methodology in which we model scripts as activities and employ the concept of an intentional agent to reason about both normal and exceptional scenarios.

Cite as

Daniela Inclezan, Qinglin Zhang, Marcello Balduccini, and Ankush Israney. Understanding Restaurant Stories Using an ASP Theory of Intentions. In Technical Communications of the 33rd International Conference on Logic Programming (ICLP 2017). Open Access Series in Informatics (OASIcs), Volume 58, pp. 7:1-7:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{inclezan_et_al:OASIcs.ICLP.2017.7,
  author =	{Inclezan, Daniela and Zhang, Qinglin and Balduccini, Marcello and Israney, Ankush},
  title =	{{Understanding Restaurant Stories Using an ASP Theory of Intentions}},
  booktitle =	{Technical Communications of the 33rd International Conference on Logic Programming (ICLP 2017)},
  pages =	{7:1--7:4},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-058-3},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{58},
  editor =	{Rocha, Ricardo and Son, Tran Cao and Mears, Christopher and Saeedloei, Neda},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICLP.2017.7},
  URN =		{urn:nbn:de:0030-drops-84638},
  doi =		{10.4230/OASIcs.ICLP.2017.7},
  annote =	{Keywords: answer set programming, story understanding, theory of intentions}
}

Zhang, Zhizheng

Document
Introspecting Preferences in Answer Set Programming

Authors: Zhizheng Zhang

Published in: OASIcs, Volume 64, Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)


Abstract
This paper develops a logic programming language, ASP^EP, that extends the answer set programming language with a new epistemic operator >~_x, where x in {#, supseteq}. The operators are used between two literals in rule bodies and thus allow for the representation of introspection of preferences in the presence of multiple belief sets: G >~_# F expresses that G is preferred to F by the cardinality of the sets, and G >~_supseteq F expresses that G is preferred to F by set-theoretic inclusion. We define the semantics of ASP^EP, explore the relation to the languages of strong introspection, and study applications of ASP^EP by modeling the Monty Hall problem and the principle of majority.

Cite as

Zhizheng Zhang. Introspecting Preferences in Answer Set Programming. In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Open Access Series in Informatics (OASIcs), Volume 64, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{zhang:OASIcs.ICLP.2018.3,
  author =	{Zhang, Zhizheng},
  title =	{{Introspecting Preferences in Answer Set Programming}},
  booktitle =	{Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)},
  pages =	{3:1--3:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-090-3},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{64},
  editor =	{Dal Palu', Alessandro and Tarau, Paul and Saeedloei, Neda and Fodor, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICLP.2018.3},
  URN =		{urn:nbn:de:0030-drops-98694},
  doi =		{10.4230/OASIcs.ICLP.2018.3},
  annote =	{Keywords: Answer Set, Preference, Introspection}
}

Zhang, Chongchong

Document
Fog Network Task Scheduling for IoT Applications

Authors: Chongchong Zhang, Fei Shen, Jiong Jin, and Yang Yang

Published in: OASIcs, Volume 80, 2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020)


Abstract
In Internet of Things (IoT) networks, data traffic is often very bursty and unpredictable. It is therefore very difficult to analyze and guarantee the delay performance of delay-sensitive IoT applications in fog networks, such as emergency monitoring, intelligent manufacturing, and autonomous driving. To address this challenging problem, a Bursty Elastic Task Scheduling (BETS) algorithm is developed to best accommodate bursty task arrivals and various requirements in IoT networks, thus optimizing the service experience of delay-sensitive applications with only limited communication resources in time-varying and competitive environments. To better describe the stability and consistency of Quality of Service (QoS) in realistic scenarios, a new performance metric, the "Bursty Service Experience Index (BSEI)", is defined and quantified as delay jitter normalized by the average delay. Finally, numerical results show that BETS achieves a 5-10 times lower BSEI than traditional task scheduling algorithms, e.g., Proportional Fair (PF) and Max Carrier-to-Interference ratio (MCI), under bursty traffic conditions. These results demonstrate that BETS can effectively smooth out the bursty characteristics of IoT networks and provide much more predictable and acceptable QoS for delay-sensitive applications.
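
The new metric is straightforward to compute. A minimal sketch, assuming jitter is taken as the standard deviation of the delays (the paper fixes the exact jitter definition):

import statistics

def bsei(delays):
    # Bursty Service Experience Index: delay jitter normalized by the
    # average delay; lower values mean a more stable service experience.
    return statistics.stdev(delays) / statistics.mean(delays)

smooth_delays = [10, 11, 9, 10, 10]   # stable service (ms)
bursty_delays = [2, 30, 1, 40, 2]     # similar mean, highly bursty

print(round(bsei(smooth_delays), 3))  # ~0.071
print(round(bsei(bursty_delays), 3))  # ~1.24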

Cite as

Chongchong Zhang, Fei Shen, Jiong Jin, and Yang Yang. Fog Network Task Scheduling for IoT Applications. In 2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020). Open Access Series in Informatics (OASIcs), Volume 80, pp. 10:1-10:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{zhang_et_al:OASIcs.Fog-IoT.2020.10,
  author =	{Zhang, Chongchong and Shen, Fei and Jin, Jiong and Yang, Yang},
  title =	{{Fog Network Task Scheduling for IoT Applications}},
  booktitle =	{2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020)},
  pages =	{10:1--10:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-144-3},
  ISSN =	{2190-6807},
  year =	{2020},
  volume =	{80},
  editor =	{Cervin, Anton and Yang, Yang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Fog-IoT.2020.10},
  URN =		{urn:nbn:de:0030-drops-120049},
  doi =		{10.4230/OASIcs.Fog-IoT.2020.10},
  annote =	{Keywords: Task Scheduling, Internet of Things, fog network, delay sensitive}
}

Zhang, Tiehua

Document
Realizing Video Analytic Service in the Fog-Based Infrastructure-Less Environments

Authors: Qiushi Zheng, Jiong Jin, Tiehua Zhang, Longxiang Gao, and Yong Xiang

Published in: OASIcs, Volume 80, 2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020)


Abstract
Deep learning has unleashed great potential in many fields and is now the most significant facilitator of video analytics, owing to its capability to provide more intelligent services in complex scenarios. Meanwhile, the emergence of fog computing has brought unprecedented opportunities to provision intelligent services in infrastructure-less environments like remote national parks and rural farms. However, most deep learning algorithms are computationally intensive and cannot be executed in such environments due to their reliance on cloud support. In this paper, we develop a video analytic framework tailored particularly to fog devices, which realizes video analytic services in a rapid manner. Convolutional neural networks are used as the core processing unit of the framework to facilitate the image analysis process.

Cite as

Qiushi Zheng, Jiong Jin, Tiehua Zhang, Longxiang Gao, and Yong Xiang. Realizing Video Analytic Service in the Fog-Based Infrastructure-Less Environments. In 2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020). Open Access Series in Informatics (OASIcs), Volume 80, pp. 11:1-11:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{zheng_et_al:OASIcs.Fog-IoT.2020.11,
  author =	{Zheng, Qiushi and Jin, Jiong and Zhang, Tiehua and Gao, Longxiang and Xiang, Yong},
  title =	{{Realizing Video Analytic Service in the Fog-Based Infrastructure-Less Environments}},
  booktitle =	{2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020)},
  pages =	{11:1--11:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-144-3},
  ISSN =	{2190-6807},
  year =	{2020},
  volume =	{80},
  editor =	{Cervin, Anton and Yang, Yang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Fog-IoT.2020.11},
  URN =		{urn:nbn:de:0030-drops-120050},
  doi =		{10.4230/OASIcs.Fog-IoT.2020.11},
  annote =	{Keywords: Fog Computing, Convolution Neural Network, Infrastructure-less Environment}
}

Zhang, Kuize

Document
State-Based Opacity of Real-Time Automata

Authors: Kuize Zhang

Published in: OASIcs, Volume 90, 27th IFIP WG 1.5 International Workshop on Cellular Automata and Discrete Complex Systems (AUTOMATA 2021)


Abstract
State-based opacity is a special type of opacity, a confidentiality property describing whether an external intruder, who has complete knowledge of the system's structure but can only see generated outputs, is unable to determine for sure whether secret states of the system have been visited. When the time of visiting secret states is specified as the initial time, the current time, any past time, or at most K steps prior to the current time, the notions of state-based opacity are formulated as initial-state opacity, current-state opacity, infinite-step opacity, and K-step opacity, respectively. In this paper, we formulate the four versions of opacity for real-time automata, which are a widely used model of real-time systems, and give 2-EXPTIME verification algorithms for the four notions by defining appropriate notions of observer and reverse observer for real-time automata that are computable in 2-EXPTIME.
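
The observer construction is the standard route to verifying such properties. Below is a minimal sketch for current-state opacity of a finite untimed automaton (a simplification: the paper's contribution is extending observers to real-time automata, which is where the 2-EXPTIME bounds arise):

from collections import deque

def current_state_opaque(delta, init, secret, outputs):
    # delta: (state, output) -> iterable of successor states.
    # The observer tracks the set of states consistent with the outputs
    # seen so far; opacity fails if some reachable estimate lies entirely
    # inside the secret set (the intruder is then certain).
    start = frozenset(init)
    seen, queue = {start}, deque([start])
    while queue:
        estimate = queue.popleft()
        if estimate <= secret:
            return False
        for o in outputs:
            succ = frozenset(t for s in estimate for t in delta.get((s, o), ()))
            if succ and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True

delta = {(0, "a"): {1, 2}, (1, "b"): {0}, (2, "b"): {2}}
print(current_state_opaque(delta, init={0}, secret={2}, outputs={"a", "b"}))
# False: after observing "a b b", only the secret state 2 remains possible.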

Cite as

Kuize Zhang. State-Based Opacity of Real-Time Automata. In 27th IFIP WG 1.5 International Workshop on Cellular Automata and Discrete Complex Systems (AUTOMATA 2021). Open Access Series in Informatics (OASIcs), Volume 90, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{zhang:OASIcs.AUTOMATA.2021.12,
  author =	{Zhang, Kuize},
  title =	{{State-Based Opacity of Real-Time Automata}},
  booktitle =	{27th IFIP WG 1.5 International Workshop on Cellular Automata and Discrete Complex Systems (AUTOMATA 2021)},
  pages =	{12:1--12:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-189-4},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{90},
  editor =	{Castillo-Ramirez, Alonso and Guillon, Pierre and Perrot, K\'{e}vin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.AUTOMATA.2021.12},
  URN =		{urn:nbn:de:0030-drops-140217},
  doi =		{10.4230/OASIcs.AUTOMATA.2021.12},
  annote =	{Keywords: real-time automaton, state-based opacity, observer, verification}
}

Zhang, Yifei

Document
Program Tailoring: Slicing by Sequential Criteria

Authors: Yue Li, Tian Tan, Yifei Zhang, and Jingling Xue

Published in: LIPIcs, Volume 56, 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Protocol and typestate analyses often report some sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses only on the behavior at P by computing a slice of the program affecting the values at P. In this paper, we propose to restrict our attention to the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in a SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements in all possible execution paths passing through at least one sequence in SC in the given order. With a prototyping implementation, Tailor, we show why tailoring is practically useful by conducting two case studies on seven large real-world Java applications. For program debugging and understanding, Tailor can complement program slicing by removing SC-irrelevant statements. For program analysis, Tailor can enable a pointer analysis that does not scale to a whole program to perform a more focused, and therefore potentially scalable, analysis of its specific parts containing hard language features such as reflection.
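
For intuition, here is a minimal sketch of tailoring a control-flow graph to a two-statement sequential criterion (s1 then s2): keep exactly the nodes that lie on some path from the entry that passes through s1 and then s2. The paper handles general sequences and real Java control flow; this toy version uses plain reachability:

def reachable(adj, src):
    seen, stack = {src}, [src]
    while stack:
        for w in adj.get(stack.pop(), ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def tailor(adj, entry, s1, s2):
    # Keep v iff some path entry ->* s1 ->* s2 goes through v:
    # before s1, between s1 and s2, or anywhere after s2.
    nodes = set(adj) | {w for ws in adj.values() for w in ws}
    from_entry, from_s1, from_s2 = (reachable(adj, x) for x in (entry, s1, s2))
    if s1 not in from_entry or s2 not in from_s1:
        return set()  # the criterion is not realizable at all
    kept = set()
    for v in nodes:
        from_v = reachable(adj, v)
        if (v in from_entry and s1 in from_v) or \
           (v in from_s1 and s2 in from_v) or \
           v in from_s2:
            kept.add(v)
    return kept

# open -> (read | skip) -> close -> exit ; criterion: open then close.
cfg = {"entry": ["open"], "open": ["read", "skip"], "read": ["close"],
       "skip": ["exit"], "close": ["exit"], "exit": []}
print(sorted(tailor(cfg, "entry", "open", "close")))
# ['close', 'entry', 'exit', 'open', 'read']; 'skip' is tailored away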

Cite as

Yue Li, Tian Tan, Yifei Zhang, and Jingling Xue. Program Tailoring: Slicing by Sequential Criteria. In 30th European Conference on Object-Oriented Programming (ECOOP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 56, pp. 15:1-15:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{li_et_al:LIPIcs.ECOOP.2016.15,
  author =	{Li, Yue and Tan, Tian and Zhang, Yifei and Xue, Jingling},
  title =	{{Program Tailoring: Slicing by Sequential Criteria}},
  booktitle =	{30th European Conference on Object-Oriented Programming (ECOOP 2016)},
  pages =	{15:1--15:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-014-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{56},
  editor =	{Krishnamurthi, Shriram and Lerner, Benjamin S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2016.15},
  URN =		{urn:nbn:de:0030-drops-61092},
  doi =		{10.4230/LIPIcs.ECOOP.2016.15},
  annote =	{Keywords: Program Slicing, Program Analysis, API Protocol Analysis}
}
Document
Program Tailoring: Slicing by Sequential Criteria (Artifact)

Authors: Tian Tan, Yue Li, Yifei Zhang, and Jingling Xue

Published in: DARTS, Volume 2, Issue 1, Special Issue of the 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Protocol and typestate analyses often report some sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses only on the behavior at P by computing a slice of the program affecting the values at P. In our companion paper "Program Tailoring: Slicing by Sequential Criteria", we propose to focus on the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in a SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements in all possible execution paths passing through at least one sequence in SC in the given order. This artifact is based on TAILOR, a prototyping implementation of program tailoring, and evaluates its usefulness in practice. The provided package is designed to support repeatability of all the experiments in our companion paper. Specifically, it allows users to reproduce the results for all three research questions addressed in the evaluation section of the companion paper. In addition, an extensive set of extra results, not described in the companion paper, is also included to help users better understand this work.

Cite as

Tian Tan, Yue Li, Yifei Zhang, and Jingling Xue. Program Tailoring: Slicing by Sequential Criteria (Artifact). In Special Issue of the 30th European Conference on Object-Oriented Programming (ECOOP 2016). Dagstuhl Artifacts Series (DARTS), Volume 2, Issue 1, pp. 8:1-8:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{tan_et_al:DARTS.2.1.8,
  author =	{Tan, Tian and Li, Yue and Zhang, Yifei and Xue, Jingling},
  title =	{{Program Tailoring: Slicing by Sequential Criteria (Artifact)}},
  pages =	{8:1--8:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2016},
  volume =	{2},
  number =	{1},
  editor =	{Tan, Tian and Li, Yue and Zhang, Yifei and Xue, Jingling},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.2.1.8},
  URN =		{urn:nbn:de:0030-drops-61298},
  doi =		{10.4230/DARTS.2.1.8},
  annote =	{Keywords: Program Slicing, Program Analysis, API Protocol Specification}
}

Zhang, Bo

Document
Hadoop-Benchmark: Rapid Prototyping and Evaluation of Self-Adaptive Behaviors in Hadoop Clusters (Artifact)

Authors: Bo Zhang, Filip Krikava, Romain Rouvoy, and Lionel Seinturier

Published in: DARTS, Volume 3, Issue 1, Special Issue of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2017)


Abstract
Arising with the popularity of Hadoop, optimizing Hadoop executions has grabbed much attention from the research community. Many research contributions have been proposed to elevate Hadoop performance, particularly in the domain of self-adaptive software systems. However, due to the complexity of operating Hadoop and the difficulty of reproducing experiments, the results of such Hadoop-related research are hard to evaluate. To address this limitation, we propose a research acceleration platform for rapid prototyping and evaluation of self-adaptive behaviors in Hadoop clusters. It provides an automated way to quickly and easily provision reproducible Hadoop environments and execute acknowledged benchmarks. The platform is based on state-of-the-art container technology and supports both distributed configurations and standalone single-host setups. We demonstrate the approach on a complete implementation of a concrete Hadoop self-adaptive case study.

Cite as

Bo Zhang, Filip Krikava, Romain Rouvoy, and Lionel Seinturier. Hadoop-Benchmark: Rapid Prototyping and Evaluation of Self-Adaptive Behaviors in Hadoop Clusters (Artifact). In Special Issue of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2017). Dagstuhl Artifacts Series (DARTS), Volume 3, Issue 1, pp. 1:1-1:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{zhang_et_al:DARTS.3.1.1,
  author =	{Zhang, Bo and Krikava, Filip and Rouvoy, Romain and Seinturier, Lionel},
  title =	{{Hadoop-Benchmark: Rapid Prototyping and Evaluation of Self-Adaptive Behaviors in Hadoop Clusters (Artifact)}},
  pages =	{1:1--1:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2017},
  volume =	{3},
  number =	{1},
  editor =	{Zhang, Bo and Krikava, Filip and Rouvoy, Romain and Seinturier, Lionel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.3.1.1},
  URN =		{urn:nbn:de:0030-drops-71392},
  doi =		{10.4230/DARTS.3.1.1},
  annote =	{Keywords: Hadoop, Docker, Rapid Prototyping, Benchmark}
}

Zhang, Weixin

Document
Compositional Programming (Artifact)

Authors: Weixin Zhang, Yaozhu Sun, and Bruno C. d. S. Oliveira

Published in: DARTS, Volume 7, Issue 2, Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
Our main paper presents CP, a Compositional Programming language in a statically typed modular programming style. This artifact includes its Haskell implementation, together with several examples and three case studies written in CP. All code snippets in our main paper can be type-checked and run using our CP interpreter.

Cite as

Weixin Zhang, Yaozhu Sun, and Bruno C. d. S. Oliveira. Compositional Programming (Artifact). In Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021). Dagstuhl Artifacts Series (DARTS), Volume 7, Issue 2, pp. 11:1-11:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{zhang_et_al:DARTS.7.2.11,
  author =	{Zhang, Weixin and Sun, Yaozhu and Oliveira, Bruno C. d. S.},
  title =	{{Compositional Programming (Artifact)}},
  pages =	{11:1--11:2},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2021},
  volume =	{7},
  number =	{2},
  editor =	{Zhang, Weixin and Sun, Yaozhu and Oliveira, Bruno C. d. S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.7.2.11},
  URN =		{urn:nbn:de:0030-drops-140356},
  doi =		{10.4230/DARTS.7.2.11},
  annote =	{Keywords: Expression Problem, Compositionality, Traits}
}
Document
EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse (Artifact)

Authors: Weixin Zhang and Bruno C. d. S. Oliveira

Published in: DARTS, Volume 3, Issue 2, Special Issue of the 31st European Conference on Object-Oriented Programming (ECOOP 2017)


Abstract
This artifact is based on EVF, an extensible and expressive Java visitor framework. EVF aims at reducing the effort involved in the creation and reuse of programming languages. EVF provides an annotation processor that automatically generates boilerplate AST, visitor, and traversal code for a given Object Algebra interface. This artifact contains the source code of the case study on "Types and Programming Languages", illustrating how effective EVF is in modularizing programming languages. The artifact also includes a microbenchmark showing that EVF has reasonable performance with respect to traditional visitors.

Cite as

Weixin Zhang and Bruno C. d. S. Oliveira. EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse (Artifact). In Special Issue of the 31st European Conference on Object-Oriented Programming (ECOOP 2017). Dagstuhl Artifacts Series (DARTS), Volume 3, Issue 2, pp. 10:1-10:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{zhang_et_al:DARTS.3.2.10,
  author =	{Zhang, Weixin and Oliveira, Bruno C. d. S.},
  title =	{{EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse (Artifact)}},
  pages =	{10:1--10:2},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2017},
  volume =	{3},
  number =	{2},
  editor =	{Zhang, Weixin and Oliveira, Bruno C. d. S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.3.2.10},
  URN =		{urn:nbn:de:0030-drops-72918},
  doi =		{10.4230/DARTS.3.2.10},
  annote =	{Keywords: visitor pattern, object algebras, modularity, domain-specific languages}
}
Document
EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse

Authors: Weixin Zhang and Bruno C. d. S. Oliveira

Published in: LIPIcs, Volume 74, 31st European Conference on Object-Oriented Programming (ECOOP 2017)


Abstract
Object Algebras are a design pattern that enables extensibility, modularity, and reuse in mainstream object-oriented languages such as Java. The theoretical foundations of Object Algebras are rooted in Church encodings of datatypes, which are in turn closely related to folds in functional programming. Unfortunately, it is well-known that certain programs are difficult to write, and may incur performance penalties, when using Church encodings/folds. This paper presents EVF: an extensible and expressive Java visitor framework. The visitors supported by EVF generalize Object Algebras and enable writing programs using a generally recursive style rather than folds. The use of such a generally recursive style enables users to write programs more naturally, which would otherwise require contrived workarounds using a fold-like structure. EVF visitors retain the type-safe extensibility of Object Algebras. The key advance in EVF is a novel technique to support extensible external visitors. Extensible external visitors are able to control traversals with direct access to the data structure being traversed, allowing dependent operations to be defined modularly without the need of advanced type system features. To make EVF practical, the framework employs annotations to automatically generate large amounts of boilerplate code related to visitors and traversals. To illustrate the applicability of EVF we conduct a case study, which refactors a large number of non-modular interpreters from the "Types and Programming Languages" (TAPL) book. Using EVF we are able to create a modular software product line (SPL) of the TAPL interpreters, enabling sharing of large portions of code and features. The TAPL software product line contains several modular operations, which would be non-trivial to define with standard Object Algebras.
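
For readers unfamiliar with the pattern: an Object Algebra turns each datatype case into a factory method, so a new operation is a new algebra implementation and a new case is a new method. EVF's visitors are Java and generated; the following Python transcription (which loses the static typing that motivates the Java design) only conveys the flavor:

class EvalAlg:
    # Each constructor of the expression datatype is a factory method;
    # this algebra builds evaluators (thunks returning numbers).
    def lit(self, n):
        return lambda: n
    def add(self, left, right):
        return lambda: left() + right()

class PrintAlg:
    # A second operation over the same datatype: no existing code changes.
    def lit(self, n):
        return lambda: str(n)
    def add(self, left, right):
        return lambda: f"({left()} + {right()})"

def make_expr(alg):
    # 1 + (2 + 3), built against an arbitrary algebra.
    return alg.add(alg.lit(1), alg.add(alg.lit(2), alg.lit(3)))

print(make_expr(EvalAlg())())   # 6
print(make_expr(PrintAlg())())  # (1 + (2 + 3))

What EVF adds on top of this scheme is generated visitor infrastructure that lets operations recurse directly over the AST instead of being forced into the fold-like shape above.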

Cite as

Weixin Zhang and Bruno C. d. S. Oliveira. EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse. In 31st European Conference on Object-Oriented Programming (ECOOP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 74, pp. 29:1-29:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.ECOOP.2017.29,
  author =	{Zhang, Weixin and Oliveira, Bruno C. d. S.},
  title =	{{EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse}},
  booktitle =	{31st European Conference on Object-Oriented Programming (ECOOP 2017)},
  pages =	{29:1--29:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-035-4},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{74},
  editor =	{M\"{u}ller, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2017.29},
  URN =		{urn:nbn:de:0030-drops-72749},
  doi =		{10.4230/LIPIcs.ECOOP.2017.29},
  annote =	{Keywords: Visitor Pattern, Object Algebras, Modularity, Domain-Specific Languages}
}

Zhang, Licong

Document
Artifact
The Time-Triggered Wireless Architecture (Artifact)

Authors: Romain Jacob, Licong Zhang, Marco Zimmerling, Jan Beutel, Samarjit Chakraborty, and Lothar Thiele

Published in: DARTS, Volume 6, Issue 1, Special Issue of the 32nd Euromicro Conference on Real-Time Systems (ECRTS 2020)


Abstract
This artifact contains a stable version of all the data and source code required to reproduce or replicate the results presented in The Time-Triggered Wireless Architecture. One GitHub repository serves as the main hub for all information related to the artifact. The README file contains detailed instructions for
- Running the TTnet model
- Compiling and running TTnet
- Running the TTW scheduler
- Reproducing the data processing
- Reproducing the plots

Cite as

Romain Jacob, Licong Zhang, Marco Zimmerling, Jan Beutel, Samarjit Chakraborty, and Lothar Thiele. The Time-Triggered Wireless Architecture (Artifact). In Special Issue of the 32nd Euromicro Conference on Real-Time Systems (ECRTS 2020). Dagstuhl Artifacts Series (DARTS), Volume 6, Issue 1, pp. 5:1-5:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@Article{jacob_et_al:DARTS.6.1.5,
  author =	{Jacob, Romain and Zhang, Licong and Zimmerling, Marco and Beutel, Jan and Chakraborty, Samarjit and Thiele, Lothar},
  title =	{{The Time-Triggered Wireless Architecture (Artifact)}},
  pages =	{5:1--5:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2020},
  volume =	{6},
  number =	{1},
  editor =	{Jacob, Romain and Zhang, Licong and Zimmerling, Marco and Beutel, Jan and Chakraborty, Samarjit and Thiele, Lothar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.6.1.5},
  URN =		{urn:nbn:de:0030-drops-123952},
  doi =		{10.4230/DARTS.6.1.5},
  annote =	{Keywords: Time-triggered architecture, wireless bus, synchronous transmissions}
}
Document
The Time-Triggered Wireless Architecture

Authors: Romain Jacob, Licong Zhang, Marco Zimmerling, Jan Beutel, Samarjit Chakraborty, and Lothar Thiele

Published in: LIPIcs, Volume 165, 32nd Euromicro Conference on Real-Time Systems (ECRTS 2020)


Abstract
Wirelessly interconnected sensors, actuators, and controllers promise greater flexibility, lower installation and maintenance costs, and higher robustness in harsh conditions than wired solutions. However, to facilitate the adoption of wireless communication in cyber-physical systems (CPS), the functional and non-functional properties must be similar to those known from wired architectures. We thus present Time-Triggered Wireless (TTW), a wireless architecture for multi-mode CPS that offers reliable communication with guarantees on end-to-end delays among distributed applications executing on low-cost, low-power embedded devices. We achieve this by exploiting the high reliability and deterministic behavior of a communication stack we design based on synchronous transmissions, and by coupling the timings of distributed task executions and message exchanges across the wireless network by solving a novel co-scheduling problem. While some of the concepts in TTW have existed for some time and TTW has already been successfully applied for feedback control and coordination of multiple mechanical systems with closed-loop stability guarantees, this paper presents the key algorithmic, scheduling, and networking mechanisms behind TTW, along with their experimental evaluation, which have not been published before. TTW is open source and ready to use: https://ttw.ethz.ch.
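
The co-scheduling problem couples task offsets with the offsets of the communication rounds carrying the messages. A hedged sketch of the flavor of constraints involved, with illustrative symbols that are not TTW's actual formulation:

\[
o_{\tau_s} + e_{\tau_s} \le o_m, \qquad
o_m + C_m \le o_{\tau_d}, \qquad
(o_{\tau_d} + e_{\tau_d}) - o_{\tau_s} \le D,
\]

where \(o_{\tau_s}, o_{\tau_d}\) are the offsets of the producing and consuming tasks, \(e\) their worst-case execution times, \(o_m\) and \(C_m\) the offset and length of the round carrying message \(m\), and \(D\) the end-to-end deadline. Solving for all offsets jointly is what couples computation and communication timings.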

Cite as

Romain Jacob, Licong Zhang, Marco Zimmerling, Jan Beutel, Samarjit Chakraborty, and Lothar Thiele. The Time-Triggered Wireless Architecture. In 32nd Euromicro Conference on Real-Time Systems (ECRTS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 165, pp. 19:1-19:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{jacob_et_al:LIPIcs.ECRTS.2020.19,
  author =	{Jacob, Romain and Zhang, Licong and Zimmerling, Marco and Beutel, Jan and Chakraborty, Samarjit and Thiele, Lothar},
  title =	{{The Time-Triggered Wireless Architecture}},
  booktitle =	{32nd Euromicro Conference on Real-Time Systems (ECRTS 2020)},
  pages =	{19:1--19:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-152-8},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{165},
  editor =	{V\"{o}lp, Marcus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2020.19},
  URN =		{urn:nbn:de:0030-drops-123826},
  doi =		{10.4230/LIPIcs.ECRTS.2020.19},
  annote =	{Keywords: Time-triggered architecture, wireless bus, synchronous transmissions}
}

Zhang, Hailong

Document
Artifact
Differential Privacy for Coverage Analysis of Software Traces (Artifact)

Authors: Yu Hao, Sufian Latif, Hailong Zhang, Raef Bassily, and Atanas Rountev

Published in: DARTS, Volume 7, Issue 2, Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
We propose a differentially private coverage analysis for software traces. To demonstrate that it achieves low error and high precision while preserving privacy, we evaluate the analysis on simulated traces for 15 Android apps. The open source implementation of the analysis, which is in Java, and the dataset used in the experiments are released as an artifact. We also provide specific guidance on reproducing the experimental results.

Cite as

Yu Hao, Sufian Latif, Hailong Zhang, Raef Bassily, and Atanas Rountev. Differential Privacy for Coverage Analysis of Software Traces (Artifact). In Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021). Dagstuhl Artifacts Series (DARTS), Volume 7, Issue 2, pp. 7:1-7:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@Article{hao_et_al:DARTS.7.2.7,
  author =	{Hao, Yu and Latif, Sufian and Zhang, Hailong and Bassily, Raef and Rountev, Atanas},
  title =	{{Differential Privacy for Coverage Analysis of Software Traces (Artifact)}},
  pages =	{7:1--7:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2021},
  volume =	{7},
  number =	{2},
  editor =	{Hao, Yu and Latif, Sufian and Zhang, Hailong and Bassily, Raef and Rountev, Atanas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.7.2.7},
  URN =		{urn:nbn:de:0030-drops-140319},
  doi =		{10.4230/DARTS.7.2.7},
  annote =	{Keywords: Trace Profiling, Differential Privacy, Program Analysis}
}
Document
Differential Privacy for Coverage Analysis of Software Traces

Authors: Yu Hao, Sufian Latif, Hailong Zhang, Raef Bassily, and Atanas Rountev

Published in: LIPIcs, Volume 194, 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
This work considers software execution traces, where a trace is a sequence of run-time events. Each user of a software system collects the set of traces covered by her execution of the software, and reports this set to an analysis server. Our goal is to report the local data of each user in a privacy-preserving manner by employing local differential privacy, a powerful theoretical framework for designing privacy-preserving data analysis. A significant advantage of such analysis is that it offers principled "built-in" privacy with clearly-defined and quantifiable privacy protections. In local differential privacy, the data of an individual user is modified using a local randomizer before being sent to the untrusted analysis server. Based on the randomized information from all users, the analysis server computes, for each trace, an estimate of how many users have covered it. Such analysis requires that the domain of possible traces be defined ahead of time. Unlike in prior related work, here the domain is either infinite or, at best, restricted to many billions of elements. Further, the traces in this domain typically have structure defined by the static properties of the software. To capture these novel aspects, we define the trace domain with the help of context-free grammars. We illustrate this approach with two exemplars: a call chain analysis in which traces are described through a regular language, and an enter/exit trace analysis in which traces are described by a balanced-parentheses context-free language. Randomization over such domains is challenging due to their large size, which makes it impossible to use prior randomization techniques. To solve this problem, we propose to use count sketch, a fixed-size hashing data structure for summarizing frequent items. We develop a version of count sketch for trace analysis and demonstrate its suitability for software execution data. In addition, instead of randomizing separately each contribution to the sketch, we develop a much-faster one-shot randomization of the accumulated sketch data. One important client of the collected information is the identification of high-frequency ("hot") traces. We develop a novel approach to identify hot traces from the collected randomized sketches. A key insight is that the very large domain of possible traces can be efficiently explored for hot traces by using the frequency estimates of a visited trace and its prefixes and suffixes. Our experimental study of both call chain analysis and enter/exit trace analysis indicates that the frequency estimates, as well as the identification of hot traces, achieve high accuracy and high privacy.
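
As a concrete (and deliberately simplified) picture of the data structure involved: a count sketch maintains a small table of signed counters, and the one-shot step below adds noise to the accumulated table once, rather than randomizing each contribution separately. Hash choices, sizes, and the noise model here are assumptions for illustration, not the paper's exact construction.

import java.util.Arrays;
import java.util.Random;

// Minimal count sketch over trace strings (illustrative sketch only).
class TraceSketch {
    final int rows, cols;
    final double[][] table;
    final long[] seeds;
    final Random rng = new Random(7);

    TraceSketch(int rows, int cols) {
        this.rows = rows; this.cols = cols;
        table = new double[rows][cols];
        seeds = new long[rows];
        for (int r = 0; r < rows; r++) seeds[r] = rng.nextLong();
    }
    int bucket(int r, String trace) {   // per-row bucket hash
        return Math.floorMod(Long.hashCode(seeds[r] ^ trace.hashCode() * 0x9E3779B97F4A7C15L), cols);
    }
    int sign(int r, String trace) {     // per-row +/-1 hash
        return ((Long.hashCode(~seeds[r] ^ trace.hashCode()) & 1) == 0) ? 1 : -1;
    }
    void add(String trace) {            // record one covered trace
        for (int r = 0; r < rows; r++) table[r][bucket(r, trace)] += sign(r, trace);
    }
    void oneShotRandomize(double scale) {  // noise added once, to the whole table
        for (double[] row : table)
            for (int c = 0; c < cols; c++) row[c] += laplace(scale);
    }
    double laplace(double b) {          // standard inverse-CDF Laplace sampler
        double u = rng.nextDouble() - 0.5;
        return -b * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }
    double estimate(String trace) {     // median of the per-row estimates
        double[] est = new double[rows];
        for (int r = 0; r < rows; r++) est[r] = sign(r, trace) * table[r][bucket(r, trace)];
        Arrays.sort(est);
        return est[rows / 2];
    }
}

A frequency estimate for a candidate trace is then read off the noisy table; as the abstract describes, exploring a visited trace together with its prefixes and suffixes narrows the huge trace domain down to the promising region when hunting for hot traces.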

Cite as

Yu Hao, Sufian Latif, Hailong Zhang, Raef Bassily, and Atanas Rountev. Differential Privacy for Coverage Analysis of Software Traces. In 35th European Conference on Object-Oriented Programming (ECOOP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 194, pp. 8:1-8:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{hao_et_al:LIPIcs.ECOOP.2021.8,
  author =	{Hao, Yu and Latif, Sufian and Zhang, Hailong and Bassily, Raef and Rountev, Atanas},
  title =	{{Differential Privacy for Coverage Analysis of Software Traces}},
  booktitle =	{35th European Conference on Object-Oriented Programming (ECOOP 2021)},
  pages =	{8:1--8:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-190-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{194},
  editor =	{M{\o}ller, Anders and Sridharan, Manu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2021.8},
  URN =		{urn:nbn:de:0030-drops-140513},
  doi =		{10.4230/LIPIcs.ECOOP.2021.8},
  annote =	{Keywords: Trace Profiling, Differential Privacy, Program Analysis}
}

Zhang, Mingbo

Document
Decomposition of Differential Polynomials

Authors: Xiao-Shan Gao and Mingbo Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 6271, Challenges in Symbolic Computation Software (2006)


Abstract
We present an algorithm to decompose nonlinear differential polynomials in one variable and with rational functions as coefficients. The algorithm is implemented in Maple for the constant field case. The program can be used to decompose differential polynomials with more than one thousand terms effectively.
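
A hand-picked toy example of such a decomposition (not taken from the paper):

\[
f = (y')^2 + 2\,y\,y' + y^2 = g(h), \qquad g = z^2, \quad h = y' + y,
\]

since substituting \(h\) for \(z\) in \(g\) gives \((y'+y)^2 = f\). The algorithm finds such right factors \(h\) for far larger inputs.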

Cite as

Xiao-Shan Gao and Mingbo Zhang. Decomposition of Differential Polynomials. In Challenges in Symbolic Computation Software. Dagstuhl Seminar Proceedings, Volume 6271, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


Copy BibTex To Clipboard

@InProceedings{gao_et_al:DagSemProc.06271.9,
  author =	{Gao, Xiao-Shan and Zhang, Mingbo},
  title =	{{Decomposition of Differential Polynomials}},
  booktitle =	{Challenges in Symbolic Computation Software},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{6271},
  editor =	{Wolfram Decker and Mike Dewar and Erich Kaltofen and Stephen Watt},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.06271.9},
  URN =		{urn:nbn:de:0030-drops-7726},
  doi =		{10.4230/DagSemProc.06271.9},
  annote =	{Keywords: Decomposition, differential polynomial, difference polynomial}
}

Zhang, Lijun

Document
Complete Volume
LIPIcs, Volume 118, CONCUR'18, Complete Volume

Authors: Sven Schewe and Lijun Zhang

Published in: LIPIcs, Volume 118, 29th International Conference on Concurrency Theory (CONCUR 2018)


Abstract
LIPIcs, Volume 118, CONCUR'18, Complete Volume

Cite as

29th International Conference on Concurrency Theory (CONCUR 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 118, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@Proceedings{schewe_et_al:LIPIcs.CONCUR.2018,
  title =	{{LIPIcs, Volume 118, CONCUR'18, Complete Volume}},
  booktitle =	{29th International Conference on Concurrency Theory (CONCUR 2018)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-087-3},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{118},
  editor =	{Schewe, Sven and Zhang, Lijun},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2018},
  URN =		{urn:nbn:de:0030-drops-97431},
  doi =		{10.4230/LIPIcs.CONCUR.2018},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Sven Schewe and Lijun Zhang

Published in: LIPIcs, Volume 118, 29th International Conference on Concurrency Theory (CONCUR 2018)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

29th International Conference on Concurrency Theory (CONCUR 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 118, pp. 0:i-0:xxi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{schewe_et_al:LIPIcs.CONCUR.2018.0,
  author =	{Schewe, Sven and Zhang, Lijun},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{29th International Conference on Concurrency Theory (CONCUR 2018)},
  pages =	{0:i--0:xxi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-087-3},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{118},
  editor =	{Schewe, Sven and Zhang, Lijun},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2018.0},
  URN =		{urn:nbn:de:0030-drops-95386},
  doi =		{10.4230/LIPIcs.CONCUR.2018.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Lazy Probabilistic Model Checking without Determinisation

Authors: Ernst Moritz Hahn, Guangyuan Li, Sven Schewe, Andrea Turrini, and Lijun Zhang

Published in: LIPIcs, Volume 42, 26th International Conference on Concurrency Theory (CONCUR 2015)


Abstract
The bottleneck in the quantitative analysis of Markov chains and Markov decision processes against specifications given in LTL or as some form of nondeterministic Büchi automata is the inclusion of a determinisation step of the automaton under consideration. In this paper, we show that full determinisation can be avoided: subset and breakpoint constructions suffice. We have implemented our approach - both explicit and symbolic versions - in a prototype tool. Our experiments show that our prototype can compete with mature tools like PRISM.
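
To illustrate just the first of the two constructions: below is the classic subset (powerset) construction for an NFA over finite words, a simplification for exposition; the paper applies subset and breakpoint constructions to Büchi automata inside probabilistic model checking.

import java.util.*;

// Plain subset construction for an NFA over finite words (expository sketch).
class SubsetConstruction {
    // delta.get(q).get(a) = set of successors of state q on letter a
    static Map<Set<Integer>, Map<Character, Set<Integer>>> determinize(
            Map<Integer, Map<Character, Set<Integer>>> delta,
            Set<Integer> initial, Set<Character> alphabet) {
        Map<Set<Integer>, Map<Character, Set<Integer>>> out = new HashMap<>();
        Deque<Set<Integer>> work = new ArrayDeque<>();
        work.push(initial);
        while (!work.isEmpty()) {
            Set<Integer> s = work.pop();
            if (out.containsKey(s)) continue;      // subset already explored
            Map<Character, Set<Integer>> row = new HashMap<>();
            for (char a : alphabet) {
                Set<Integer> t = new HashSet<>();  // union of successor sets
                for (int q : s)
                    t.addAll(delta.getOrDefault(q, Map.of()).getOrDefault(a, Set.of()));
                row.put(a, t);
                work.push(t);
            }
            out.put(s, row);
        }
        return out;
    }
}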

Cite as

Ernst Moritz Hahn, Guangyuan Li, Sven Schewe, Andrea Turrini, and Lijun Zhang. Lazy Probabilistic Model Checking without Determinisation. In 26th International Conference on Concurrency Theory (CONCUR 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 42, pp. 354-367, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{hahn_et_al:LIPIcs.CONCUR.2015.354,
  author =	{Hahn, Ernst Moritz and Li, Guangyuan and Schewe, Sven and Turrini, Andrea and Zhang, Lijun},
  title =	{{Lazy Probabilistic Model Checking without Determinisation}},
  booktitle =	{26th International Conference on Concurrency Theory (CONCUR 2015)},
  pages =	{354--367},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-91-0},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{42},
  editor =	{Aceto, Luca and de Frutos Escrig, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2015.354},
  URN =		{urn:nbn:de:0030-drops-53918},
  doi =		{10.4230/LIPIcs.CONCUR.2015.354},
  annote =	{Keywords: Markov decision processes, model checking, PLTL, determinisation}
}
Document
Efficient Approximation of Optimal Control for Continuous-Time Markov Games

Authors: John Fearnley, Markus Rabe, Sven Schewe, and Lijun Zhang

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
We study the time-bounded reachability problem for continuous time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretization techniques to break time into discrete intervals, and optimal control is approximated for each interval separately. Current techniques provide an accuracy of O(\epsilon^2) on each interval, which leads to an infeasibly large number of intervals. We propose a sequence of approximations that achieve accuracies of O(\epsilon^3), O(\epsilon^4), and O(\epsilon^5), which allow us to drastically reduce the number of intervals that are considered. For CTMDPs, the resulting algorithms are comparable to the heuristic approach given by Buchholz and Schulz, while also being theoretically justified. All of our results generalise to CTMGs, where our results yield the first practically implementable algorithms for this problem. We also provide positional strategies for both players that achieve similar error bounds.
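
The payoff of the higher-order approximations can be seen in one line of arithmetic (a sketch; \(T\) denotes the time bound and constants are suppressed). With intervals of length \(\epsilon\) and per-interval error \(O(\epsilon^{m+1})\),

\[
\frac{T}{\epsilon} \cdot O(\epsilon^{m+1}) = O(T\epsilon^{m})
\quad \Longrightarrow \quad
\#\text{intervals} = O\big(T\,(T/\delta)^{1/m}\big) \text{ for global error } \delta,
\]

so moving from per-interval accuracy \(O(\epsilon^2)\) (\(m = 1\)) to \(O(\epsilon^3)\), \(O(\epsilon^4)\), \(O(\epsilon^5)\) reduces the interval count from \(O(T^2/\delta)\) to \(O(T^{3/2}/\delta^{1/2})\), \(O(T^{4/3}/\delta^{1/3})\), and \(O(T^{5/4}/\delta^{1/4})\).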

Cite as

John Fearnley, Markus Rabe, Sven Schewe, and Lijun Zhang. Efficient Approximation of Optimal Control for Continuous-Time Markov Games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 399-410, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{fearnley_et_al:LIPIcs.FSTTCS.2011.399,
  author =	{Fearnley, John and Rabe, Markus and Schewe, Sven and Zhang, Lijun},
  title =	{{Efficient Approximation of Optimal Control for Continuous-Time Markov Games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{399--410},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.399},
  URN =		{urn:nbn:de:0030-drops-33547},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.399},
  annote =	{Keywords: Continuous time Markov decision processes and games, discretisation}
}
Document
07101 Working Group Report – Performance Measures Other Than Time

Authors: Lucia Cloth, Pepijn Crouzen, Matthias Fruth, Tingting Han, David N. Jansen, Mark Kattenbelt, Gerard J. M. Smit, and Lijun Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 7101, Quantitative Aspects of Embedded Systems (2007)


Abstract
This presentation surveys a few performance measures that might be of interest, together with possible evaluation methods.

Cite as

Lucia Cloth, Pepijn Crouzen, Matthias Fruth, Tingting Han, David N. Jansen, Mark Kattenbelt, Gerard J. M. Smit, and Lijun Zhang. 07101 Working Group Report – Performance Measures Other Than Time. In Quantitative Aspects of Embedded Systems. Dagstuhl Seminar Proceedings, Volume 7101, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


Copy BibTex To Clipboard

@InProceedings{cloth_et_al:DagSemProc.07101.3,
  author =	{Cloth, Lucia and Crouzen, Pepijn and Fruth, Matthias and Han, Tingting and Jansen, David N. and Kattenbelt, Mark and Smit, Gerard J. M. and Zhang, Lijun},
  title =	{{07101 Working Group Report – Performance Measures Other Than Time}},
  booktitle =	{Quantitative Aspects of Embedded Systems},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2007},
  volume =	{7101},
  editor =	{Boudewijn Haverkort and Joost-Pieter Katoen and Lothar Thiele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.07101.3},
  URN =		{urn:nbn:de:0030-drops-11396},
  doi =		{10.4230/DagSemProc.07101.3},
  annote =	{Keywords: }
}

Zhang, Dongmo

Document
The Logic of Bargaining

Authors: Dongmo Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 7351, Formal Models of Belief Change in Rational Agents (2007)


Abstract
This paper reexamines the game-theoretic bargaining theory from logic and Artificial Intelligence perspectives. We present an axiomatic characterization of the logical solutions to bargaining problems. A bargaining situation is described in propositional logic with a numerical representation of bargainers' preferences. A solution to n-person bargaining problems is proposed based on the maxmin rule over the degrees of bargainers' satisfaction. The solution is uniquely characterized by four axioms (collective rationality, scale invariance, symmetry, and mutually comparable monotonicity) in conjunction with three other fundamental assumptions (individual rationality, consistency, and comprehensiveness). The Pareto efficient solutions are characterized by the axioms of scale invariance, Pareto optimality, and restricted mutually comparable monotonicity, along with the basic assumptions. The relationships among these axioms and assumptions, and their links to belief revision postulates and game theory axioms, are discussed. The framework would help us to identify the logical reasoning behind bargaining processes and would initiate a new methodology of bargaining analysis.
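
The maxmin rule in symbols (a hedged paraphrase; the notation is not the paper's):

\[
F(B) \;=\; \arg\max_{x \in \Omega(B)} \; \min_{1 \le i \le n} s_i(x),
\]

where \(\Omega(B)\) is the set of feasible agreements of the bargaining situation \(B\) and \(s_i(x)\) is the degree to which bargainer \(i\) is satisfied by the agreement \(x\).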

Cite as

Dongmo Zhang. The Logic of Bargaining. In Formal Models of Belief Change in Rational Agents. Dagstuhl Seminar Proceedings, Volume 7351, pp. 1-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


Copy BibTex To Clipboard

@InProceedings{zhang:DagSemProc.07351.21,
  author =	{Zhang, Dongmo},
  title =	{{The Logic of Bargaining}},
  booktitle =	{Formal Models of Belief Change in Rational Agents},
  pages =	{1--34},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2007},
  volume =	{7351},
  editor =	{Giacomo Bonanno and James Delgrande and J\'{e}r\^{o}me Lang and Hans Rott},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.07351.21},
  URN =		{urn:nbn:de:0030-drops-12031},
  doi =		{10.4230/DagSemProc.07351.21},
  annote =	{Keywords: Bargaining theory, belief revision, game theory}
}

Zhang, Dapeng

Document
Learning with Table Soccer

Authors: Dapeng Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 8372, Computer Science in Sport - Mission and Methods (2008)


Abstract
Our research focuses on learning approaches with the robot KiRo. KiRo is a table soccer robot that can challenge even advanced human players. Previously, we developed a learning-by-imitation method with which KiRo can automatically acquire demonstrated actions. Recently, we constructed a game recorder that collects data from human-played games. Work in progress concerns interpreting the recorded data, that is, classifying and evaluating human players' skills. A brief overview of the previous work is given, and the perspectives are discussed.

Cite as

Dapeng Zhang. Learning with Table Soccer. In Computer Science in Sport - Mission and Methods. Dagstuhl Seminar Proceedings, Volume 8372, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{zhang:DagSemProc.08372.6,
  author =	{Zhang, Dapeng},
  title =	{{Learning with Table Soccer}},
  booktitle =	{Computer Science in Sport - Mission and Methods},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8372},
  editor =	{Arnold Baca and Martin Lames and Keith Lyons and Bernhard Nebel and Josef Wiemeyer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08372.6},
  URN =		{urn:nbn:de:0030-drops-16839},
  doi =		{10.4230/DagSemProc.08372.6},
  annote =	{Keywords: Table Soccer Robot, Learning}
}

Zhang, Yingqian

Document
Creating incentives to prevent execution failures: an extension of VCG mechanism

Authors: Yingqian Zhang and Mathijs de Weerdt

Published in: Dagstuhl Seminar Proceedings, Volume 8461, Planning in Multiagent Systems (2009)


Abstract
When information or control in a multiagent planning system is private to the agents, they may misreport this information or refuse to execute an agreed outcome, in order to change the resulting end state of such a system to their benefit. In some domains this may result in an execution failure. We show that in such settings VCG mechanisms lose truthfulness, and that the utility of truthful agents can become negative when using VCG payments (i.e., VCG is not strongly individually rational). To deal with this problem, we introduce an extended payment structure which takes into account the actual execution of the promised outcome. We show that this extended mechanism can guarantee a nonnegative utility and is (i) incentive compatible in a Nash equilibrium, and (ii) incentive compatible in dominant strategies if and only if all agents can be verified during execution.
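
For reference, the classic VCG (Clarke) payment that the paper extends charges agent \(i\) the externality it imposes on the others (this is the standard textbook form, not the paper's extended payment):

\[
p_i \;=\; \max_{o \in O} \sum_{j \ne i} \hat v_j(o) \;-\; \sum_{j \ne i} \hat v_j(o^*),
\qquad
o^* = \arg\max_{o \in O} \sum_{j} \hat v_j(o),
\]

where \(\hat v_j\) are the reported valuations over outcomes \(O\). The extension described in the abstract additionally conditions the payment on whether the promised outcome \(o^*\) is actually executed.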

Cite as

Yingqian Zhang and Mathijs de Weerdt. Creating incentives to prevent execution failures: an extension of VCG mechanism. In Planning in Multiagent Systems. Dagstuhl Seminar Proceedings, Volume 8461, pp. 1-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:DagSemProc.08461.4,
  author =	{Zhang, Yingqian and de Weerdt, Mathijs},
  title =	{{Creating incentives to prevent execution failures: an extension of VCG mechanism}},
  booktitle =	{Planning in Multiagent Systems},
  pages =	{1--18},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{8461},
  editor =	{J\"{u}rgen Dix and Edmund H. Durfee and Cees Witteveen},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08461.4},
  URN =		{urn:nbn:de:0030-drops-18705},
  doi =		{10.4230/DagSemProc.08461.4},
  annote =	{Keywords: Mechanism design, multiagent planning}
}

Zhang, Lixia

Document
09102 Abstracts Collection – Perspectives Workshop: Naming and Addressing in a Future Internet

Authors: Jari Arkko, Marcelo Bagnulo Braun, Scott Brim, Lars Eggert, Christian Vogt, and Lixia Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 9102, Perspectives Workshop: Naming and Addressing in a Future Internet (2011)


Abstract
From 01.03. to 04.03.2009, the Perspectives Workshop 09102 "Perspectives Workshop: Naming and Addressing in a Future Internet" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Jari Arkko, Marcelo Bagnulo Braun, Scott Brim, Lars Eggert, Christian Vogt, and Lixia Zhang. 09102 Abstracts Collection – Perspectives Workshop: Naming and Addressing in a Future Internet. In Perspectives Workshop: Naming and Addressing in a Future Internet. Dagstuhl Seminar Proceedings, Volume 9102, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{arkko_et_al:DagSemProc.09102.1,
  author =	{Arkko, Jari and Bagnulo Braun, Marcelo and Brim, Scott and Eggert, Lars and Vogt, Christian and Zhang, Lixia},
  title =	{{09102 Abstracts Collection – Perspectives Workshop: Naming and Addressing in a Future Internet}},
  booktitle =	{Perspectives Workshop: Naming and Addressing in a Future Internet},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2011},
  volume =	{9102},
  editor =	{Jari Arkko and Marcelo Bagnulo Braun and Scott Brim and Lars Eggert and Christian Vogt and Lixia Zhang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09102.1},
  URN =		{urn:nbn:de:0030-drops-32792},
  doi =		{10.4230/DagSemProc.09102.1},
  annote =	{Keywords: }
}
Document
09102 Report – Perspectives Workshop: Naming and Addressing in a Future Internet

Authors: Jari Arkko, Marcelo Bagnulo Braun, Scott Brim, Lars Eggert, Christian Vogt, and Lixia Zhang

Published in: Dagstuhl Seminar Proceedings, Volume 9102, Perspectives Workshop: Naming and Addressing in a Future Internet (2011)


Abstract
This article summarizes the presentations and discussions during a workshop on naming and addressing in a future Internet that was held in March 2009 at "Schloß Dagstuhl" in Germany. The aim of the workshop was to explore the different roles that names have in an internetwork architecture, as well as attempt to come to some agreements on what characteristics are important or desirable for names in these various roles. The goal of this report is to attempt a faithful reflection of the workshop itself, presenting the different views, positions and issues discussed at the workshop in a structured way.

Cite as

Jari Arkko, Marcelo Bagnulo Braun, Scott Brim, Lars Eggert, Christian Vogt, and Lixia Zhang. 09102 Report – Perspectives Workshop: Naming and Addressing in a Future Internet. In Perspectives Workshop: Naming and Addressing in a Future Internet. Dagstuhl Seminar Proceedings, Volume 9102, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{arkko_et_al:DagSemProc.09102.2,
  author =	{Arkko, Jari and Bagnulo Braun, Marcelo and Brim, Scott and Eggert, Lars and Vogt, Christian and Zhang, Lixia},
  title =	{{09102 Report – Perspectives Workshop: Naming and Addressing in a Future Internet}},
  booktitle =	{Perspectives Workshop: Naming and Addressing in a Future Internet},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2011},
  volume =	{9102},
  editor =	{Jari Arkko and Marcelo Bagnulo Braun and Scott Brim and Lars Eggert and Christian Vogt and Lixia Zhang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09102.2},
  URN =		{urn:nbn:de:0030-drops-32785},
  doi =		{10.4230/DagSemProc.09102.2},
  annote =	{Keywords: }
}

Zhang, Ting

Document
A Tight Lower Bound for Streett Complementation

Authors: Yang Cai and Ting Zhang

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
Finite automata on infinite words (omega-automata) have proved to be a powerful tool for modeling and reasoning about infinite behaviors of reactive systems. Complementation of omega-automata is crucial in many of these applications. But the problem is non-trivial; even after extensive study during the past two decades, we still have an important type of omega-automata, namely Streett automata, for which the gap between the current best lower bound 2^(Omega(n lg nk)) and upper bound 2^(O(nk lg nk)) is substantial, since the Streett index size k can be exponential in the number of states n. In a previous work we showed a construction for complementing Streett automata with the upper bound 2^(O(n lg n + nk lg k)) for k = O(n) and 2^(O(n^2 lg n)) for k = omega(n). In this paper we establish a matching lower bound 2^(Omega(n lg n + nk lg k)) for k = O(n) and 2^(Omega(n^2 lg n)) for k = omega(n), thereby showing that the construction is asymptotically optimal with respect to the Theta(.) notation.
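
Combined with the matching construction (see the companion upper-bound paper below), the two abstracts give a tight characterization of Streett complementation:

\[
2^{\Theta(n \lg n \,+\, nk \lg k)} \;\text{ for } k = O(n),
\qquad
2^{\Theta(n^2 \lg n)} \;\text{ for } k = \omega(n).
\]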

Cite as

Yang Cai and Ting Zhang. A Tight Lower Bound for Streett Complementation. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 339-350, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.FSTTCS.2011.339,
  author =	{Cai, Yang and Zhang, Ting},
  title =	{{A Tight Lower Bound for Streett Complementation}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{339--350},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.339},
  URN =		{urn:nbn:de:0030-drops-33474},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.339},
  annote =	{Keywords: omega-automata, Streett automata, complementation, lower bounds}
}
Document
Tight Upper Bounds for Streett and Parity Complementation

Authors: Yang Cai and Ting Zhang

Published in: LIPIcs, Volume 12, Computer Science Logic (CSL'11) - 25th International Workshop/20th Annual Conference of the EACSL (2011)


Abstract
Complementation of finite automata on infinite words is not only a fundamental problem in automata theory, but also serves as a cornerstone for solving numerous decision problems in mathematical logic, model-checking, program analysis and verification. For Streett complementation, a significant gap exists between the current lower bound 2^{Omega(n*log(n*k))} and upper bound 2^{O(n*k*log(n*k))}, where n is the state size, k is the number of Streett pairs, and k can be as large as 2^{n}. Determining the complexity of Streett complementation has been an open question since the late 1980s. In this paper we show a complementation construction with upper bound 2^{O(n*log(n)+n*k*log(k))} for k=O(n) and 2^{O(n^{2}*log(n))} for k=Omega(n), which closely matches the lower bound obtained in arXiv:1102.2963. We also obtain a tight upper bound 2^{O(n*log(n))} for parity complementation.

Cite as

Yang Cai and Ting Zhang. Tight Upper Bounds for Streett and Parity Complementation. In Computer Science Logic (CSL'11) - 25th International Workshop/20th Annual Conference of the EACSL. Leibniz International Proceedings in Informatics (LIPIcs), Volume 12, pp. 112-128, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.CSL.2011.112,
  author =	{Cai, Yang and Zhang, Ting},
  title =	{{Tight Upper Bounds for Streett and Parity Complementation}},
  booktitle =	{Computer Science Logic (CSL'11) - 25th International Workshop/20th Annual Conference of the EACSL},
  pages =	{112--128},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-32-3},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{12},
  editor =	{Bezem, Marc},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2011.112},
  URN =		{urn:nbn:de:0030-drops-32269},
  doi =		{10.4230/LIPIcs.CSL.2011.112},
  annote =	{Keywords: Streett automata, omega-automata, parity automata, complementation, upper bounds}
}

Zhang, Zeyu

Document
Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network

Authors: Amy Babay, Michael Dinitz, and Zeyu Zhang

Published in: LIPIcs, Volume 122, 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)


Abstract
We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph G, a distance bound L, and p pairs of vertices (s_1,t_1),...,(s_p,t_p), the objective is to find a minimum-cost subgraph G' such that s_i and t_i have distance at most L in G' (for every i in [p]). Our main result is on the fixed-parameter tractability of this problem for parameter p. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W[1]-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W[1]-hard even to approximate.
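
In symbols, the problem is (a direct transcription of the abstract, with \(c\) the edge costs and \(d_{G'}\) the distance in the chosen subgraph):

\[
\min_{G' \subseteq G} \sum_{e \in E(G')} c(e)
\quad \text{subject to} \quad
d_{G'}(s_i, t_i) \le L \quad \text{for all } i \in [p].
\]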

Cite as

Amy Babay, Michael Dinitz, and Zeyu Zhang. Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network. In 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 122, pp. 33:1-33:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{babay_et_al:LIPIcs.FSTTCS.2018.33,
  author =	{Babay, Amy and Dinitz, Michael and Zhang, Zeyu},
  title =	{{Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network}},
  booktitle =	{38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)},
  pages =	{33:1--33:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-093-4},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{122},
  editor =	{Ganguly, Sumit and Pandya, Paritosh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2018.33},
  URN =		{urn:nbn:de:0030-drops-99329},
  doi =		{10.4230/LIPIcs.FSTTCS.2018.33},
  annote =	{Keywords: fixed-parameter tractable, network design, shallow-light steiner network, demand graphs}
}
Document
Brief Announcement
Brief Announcement: Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network

Authors: Amy Babay, Michael Dinitz, and Zeyu Zhang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph G, a distance bound L, and p pairs of vertices {(s_i,t_i)}_{i in [p]}, the objective is to find a minimum-cost subgraph G' such that s_i and t_i have distance at most L in G' (for every i in [p]). Our main result is on the fixed-parameter tractability of this problem for parameter p. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W[1]-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W[1]-hard even to approximate.

Cite as

Amy Babay, Michael Dinitz, and Zeyu Zhang. Brief Announcement: Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 104:1-104:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{babay_et_al:LIPIcs.ICALP.2018.104,
  author =	{Babay, Amy and Dinitz, Michael and Zhang, Zeyu},
  title =	{{Brief Announcement: Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{104:1--104:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.104},
  URN =		{urn:nbn:de:0030-drops-91083},
  doi =		{10.4230/LIPIcs.ICALP.2018.104},
  annote =	{Keywords: Shallow-Light, Steiner Network, Fixed-Parameter Tractability}
}
Document
Approximating Approximate Distance Oracles

Authors: Michael Dinitz and Zeyu Zhang

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Given a finite metric space (V,d), an approximate distance oracle is a data structure which, when queried on two points u,v \in V, returns an approximation to the actual distance between u and v which is within some bounded stretch factor of the true distance. There has been significant work on the tradeoff between the important parameters of approximate distance oracles (and in particular between the size, stretch, and query time), but in this paper we take a different point of view, that of per-instance optimization. If we are given a particular input metric space and stretch bound, can we find the smallest possible approximate distance oracle for that particular input? Since this question is not even well-defined, we restrict our attention to well-known classes of approximate distance oracles, and study whether we can optimize over those classes. In particular, we give an O(\log n)-approximation to the problem of finding the smallest stretch 3 Thorup-Zwick distance oracle, as well as the problem of finding the smallest Pătrașcu-Roditty distance oracle. We also prove a matching \Omega(\log n) lower bound for both problems, and an \Omega(n^{\frac{1}{k}-\frac{1}{2^{k-1}}}) integrality gap for the more general stretch (2k-1) Thorup-Zwick distance oracle. We also consider the problem of approximating the best TZ or PR approximate distance oracle with outliers, and show that more advanced techniques (SDP relaxations in particular) allow us to optimize even in the presence of outliers.
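
For context, here is the textbook stretch-3 Thorup-Zwick oracle, the class of data structures the paper optimizes over, sketched on an explicit metric; the sampling rate and representation are the standard choices, not the paper's code.

import java.util.*;

// Textbook stretch-3 Thorup-Zwick oracle built from an explicit metric d[][]
// (illustrative sketch). Expected size O(n^{3/2}), O(1) query, stretch <= 3.
class TZOracle {
    final int[] pivot;                 // p(u): nearest sampled landmark to u
    final double[] distToPivot;        // d(u, p(u))
    final Map<Integer, double[]> landmarkDist = new HashMap<>(); // landmark -> distances
    final Map<Long, Double> bunch = new HashMap<>(); // exact d(u,w) for w in B(u)

    TZOracle(double[][] d, long seed) {
        int n = d.length;
        Random rng = new Random(seed);
        double p = 1.0 / Math.sqrt(n); // standard sampling rate for k = 2
        List<Integer> landmarks = new ArrayList<>();
        for (int v = 0; v < n; v++) if (rng.nextDouble() < p) landmarks.add(v);
        if (landmarks.isEmpty()) landmarks.add(0); // avoid an empty sample
        for (int a : landmarks) landmarkDist.put(a, d[a].clone());
        pivot = new int[n];
        distToPivot = new double[n];
        for (int u = 0; u < n; u++) {
            double best = Double.POSITIVE_INFINITY; int arg = landmarks.get(0);
            for (int a : landmarks) if (d[u][a] < best) { best = d[u][a]; arg = a; }
            pivot[u] = arg; distToPivot[u] = best;
            // Bunch of u: all vertices strictly closer to u than u's pivot.
            for (int w = 0; w < n; w++)
                if (d[u][w] < best) bunch.put(key(u, w), d[u][w]);
        }
    }
    static long key(int u, int w) { return ((long) u << 32) | (w & 0xffffffffL); }

    // Returns an estimate est with d(u,v) <= est <= 3*d(u,v).
    double query(int u, int v) {
        Double exact = bunch.get(key(u, v));
        if (exact != null) return exact;                       // v in B(u): exact
        return distToPivot[u] + landmarkDist.get(pivot[u])[v]; // detour via p(u)
    }
}

The stretch-3 guarantee follows from the triangle inequality: if v is not in B(u), then d(u,p(u)) <= d(u,v), so the detour costs at most d(u,v) + (d(u,p(u)) + d(u,v)) <= 3 d(u,v).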

Cite as

Michael Dinitz and Zeyu Zhang. Approximating Approximate Distance Oracles. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dinitz_et_al:LIPIcs.ITCS.2017.52,
  author =	{Dinitz, Michael and Zhang, Zeyu},
  title =	{{Approximating Approximate Distance Oracles}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.52},
  URN =		{urn:nbn:de:0030-drops-81692},
  doi =		{10.4230/LIPIcs.ITCS.2017.52},
  annote =	{Keywords: distance oracles, approximation algorithms}
}
Document
Ranking with Diverse Intents and Correlated Contents

Authors: Jian Li and Zeyu Zhang

Published in: LIPIcs, Volume 24, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)


Abstract
We consider the following document ranking problem: We have a collection of documents, each containing some topics (e.g. sports, politics, economics). We also have a set of users with diverse interests. Assume that user u is interested in a subset I_u of topics. Each user u is also associated with a positive integer K_u, which indicates that u can be satisfied by any K_u topics in I_u. Each document s contains information for a subset C_s of topics. The objective is to pick one document at a time such that the average satisfying time is minimized, where a user's satisfying time is the first time that at least K_u topics in I_u are covered in the documents selected so far. Our main result is an O(rho)-approximation algorithm for the problem, where rho is the algorithmic integrality gap of the linear programming relaxation of the set cover instance defined by the documents and topics. This result generalizes the constant approximations for generalized min-sum set cover and ranking with unrelated intents and the logarithmic approximation for the problem of ranking with submodular valuations (when the submodular function is the coverage function), and can be seen as an interpolation between these results. We further extend our model to the case when each user may be interested in more than one sets of topics and when the user's valuation function is XOS, and obtain similar results for these models.

Cite as

Jian Li and Zeyu Zhang. Ranking with Diverse Intents and Correlated Contents. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 24, pp. 351-362, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.FSTTCS.2013.351,
  author =	{Li, Jian and Zhang, Zeyu},
  title =	{{Ranking with Diverse Intents and Correlated Contents}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)},
  pages =	{351--362},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-64-4},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{24},
  editor =	{Seth, Anil and Vishnoi, Nisheeth K.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2013.351},
  URN =		{urn:nbn:de:0030-drops-43856},
  doi =		{10.4230/LIPIcs.FSTTCS.2013.351},
  annote =	{Keywords: Approximation Algorithm, Diversification, min-sum Set Cover}
}

Zhang, Chihao

Document
Track A: Algorithms, Complexity and Games
A Perfect Sampler for Hypergraph Independent Sets

Authors: Guoliang Qiu, Yanheng Wang, and Chihao Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
The problem of uniformly sampling hypergraph independent sets is revisited. We design an efficient perfect sampler for the problem under a condition similar to that of the asymmetric Lovász local lemma. When specialized to d-regular k-uniform hypergraphs on n vertices, our sampler terminates in expected O(n log n) time provided d ≤ c⋅2^{k/2}, where c > 0 is a constant, matching the rapid mixing condition for Glauber dynamics in Hermon, Sly and Zhang [Hermon et al., 2019]. The analysis of our algorithm is simple and clean.

Cite as

Guoliang Qiu, Yanheng Wang, and Chihao Zhang. A Perfect Sampler for Hypergraph Independent Sets. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 103:1-103:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{qiu_et_al:LIPIcs.ICALP.2022.103,
  author =	{Qiu, Guoliang and Wang, Yanheng and Zhang, Chihao},
  title =	{{A Perfect Sampler for Hypergraph Independent Sets}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{103:1--103:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.103},
  URN =		{urn:nbn:de:0030-drops-164442},
  doi =		{10.4230/LIPIcs.ICALP.2022.103},
  annote =	{Keywords: Coupling from the past, Markov chains, Hypergraph independent sets}
}
Document
Sampling in Potts Model on Sparse Random Graphs

Authors: Yitong Yin and Chihao Zhang

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
We study the problem of sampling almost uniform proper q-colorings in sparse Erdős-Rényi random graphs G(n,d/n), a line of research initiated by Dyer, Flaxman, Frieze and Vigoda [Dyer et al., RANDOM STRUCT ALGOR, 2006]. We obtain a fully polynomial time almost uniform sampler (FPAUS) for the problem provided q>3d+4, improving the current best bound q>5.5d [Efthymiou, SODA, 2014]. Our sampling algorithm works for more general models and a broader family of sparse graphs. It is an efficient sampler (in the same sense as an FPAUS) for the anti-ferromagnetic Potts model with activity 0<=b<1 on G(n,d/n) provided q>3(1-b)d+4. We further identify a family of sparse graphs to which all these results can be extended. This family of graphs is characterized by the notion of a contraction function, which is a new measure of the average degree in graphs.

Cite as

Yitong Yin and Chihao Zhang. Sampling in Potts Model on Sparse Random Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 47:1-47:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{yin_et_al:LIPIcs.APPROX-RANDOM.2016.47,
  author =	{Yin, Yitong and Zhang, Chihao},
  title =	{{Sampling in Potts Model on Sparse Random Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{47:1--47:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.47},
  URN =		{urn:nbn:de:0030-drops-66706},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.47},
  annote =	{Keywords: Potts model, Sampling, Random Graph, Approximation Algorithm}
}
Document
FPTAS for Hardcore and Ising Models on Hypergraphs

Authors: Pinyan Lu, Kuan Yang, and Chihao Zhang

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
Hardcore and Ising models are two of the most important families of two-state spin systems in statistical physics. The partition function of a spin system is the central concept in statistical physics, connecting microscopic particles and their interactions with the macroscopic, statistical properties of materials such as energy, entropy, ferromagnetism, etc. If each local interaction of the system involves only two particles, the system can be described by a graph. In this case, fully polynomial-time approximation schemes (FPTAS) for computing the partition function of both the hardcore and the anti-ferromagnetic Ising model were designed up to the uniqueness condition of the system. These results are the best possible, since approximately computing the partition function beyond this threshold is NP-hard. In this paper, we generalize these results to general physical systems, where each local interaction may involve multiple particles. Such systems are described by hypergraphs. For the hardcore model, we again provide an FPTAS up to the uniqueness condition, and for the anti-ferromagnetic Ising model, we obtain an FPTAS under a slightly stronger condition.

Cite as

Pinyan Lu, Kuan Yang, and Chihao Zhang. FPTAS for Hardcore and Ising Models on Hypergraphs. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.STACS.2016.51,
  author =	{Lu, Pinyan and Yang, Kuan and Zhang, Chihao},
  title =	{{FPTAS for Hardcore and Ising Models on Hypergraphs}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.51},
  URN =		{urn:nbn:de:0030-drops-57526},
  doi =		{10.4230/LIPIcs.STACS.2016.51},
  annote =	{Keywords: hard-core model, ising model, hypergraph, spatial mixing, correlation decay}
}
Document
The Complexity of Ferromagnetic Two-spin Systems with External Fields

Authors: Jingcheng Liu, Pinyan Lu, and Chihao Zhang

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
We study the approximability of computing the partition function for ferromagnetic two-state spin systems. The remarkable algorithm by Jerrum and Sinclair showed that there is a fully polynomial-time randomized approximation scheme (FPRAS) for the special ferromagnetic Ising model with any given uniform external field. Later, Goldberg and Jerrum proved that the problem is #BIS-hard for the Ising model if we allow inconsistent external fields on different nodes. In contrast to these two results, we prove that for any ferromagnetic two-state spin system except the Ising model, there exists a threshold for external fields beyond which the problem is #BIS-hard, even if the external field is uniform.

Cite as

Jingcheng Liu, Pinyan Lu, and Chihao Zhang. The Complexity of Ferromagnetic Two-spin Systems with External Fields. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 843-856, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.APPROX-RANDOM.2014.843,
  author =	{Liu, Jingcheng and Lu, Pinyan and Zhang, Chihao},
  title =	{{The Complexity of Ferromagnetic Two-spin Systems with External Fields}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{843--856},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.843},
  URN =		{urn:nbn:de:0030-drops-47428},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.843},
  annote =	{Keywords: Spin System, #BIS-hard, FPRAS}
}

Zhang, Qin

Document
Communication Complexity of Approximate Matching in Distributed Graphs

Authors: Zengfeng Huang, Bozidar Radunovic, Milan Vojnovic, and Qin Zhang

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
In this paper we consider the communication complexity of approximation algorithms for maximum matching in a graph in the message-passing model of distributed computation. The input graph consists of n vertices and edges partitioned over a set of k sites. The output is an \alpha-approximate maximum matching in the input graph which has to be reported by one of the sites. We show a lower bound on the communication complexity of \Omega(\alpha^2 k n) and show that it is tight up to poly-logarithmic factors. This lower bound also applies to other combinatorial problems on graphs in the message-passing computation model, including max-flow and graph sparsification.

Cite as

Zengfeng Huang, Bozidar Radunovic, Milan Vojnovic, and Qin Zhang. Communication Complexity of Approximate Matching in Distributed Graphs. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 460-473, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.STACS.2015.460,
  author =	{Huang, Zengfeng and Radunovic, Bozidar and Vojnovic, Milan and Zhang, Qin},
  title =	{{Communication Complexity of Approximate Matching in Distributed Graphs}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{460--473},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.460},
  URN =		{urn:nbn:de:0030-drops-49348},
  doi =		{10.4230/LIPIcs.STACS.2015.460},
  annote =	{Keywords: approximate maximum matching, distributed computation, communication complexity}
}

Zhang, Shengyu

Document
On the Cut Dimension of a Graph

Authors: Troy Lee, Tongyang Li, Miklos Santha, and Shengyu Zhang

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
Let G = (V,w) be a weighted undirected graph with m edges. The cut dimension of G is the dimension of the span of the characteristic vectors of the minimum cuts of G, viewed as vectors in {0,1}^m. For every n ≥ 2 we show that the cut dimension of an n-vertex graph is at most 2n-3, and construct graphs realizing this bound. The cut dimension was recently defined by Graur et al. [Andrei Graur et al., 2020], who show that the maximum cut dimension of an n-vertex graph is a lower bound on the number of cut queries needed by a deterministic algorithm to solve the minimum cut problem on n-vertex graphs. For every n ≥ 2, Graur et al. exhibit a graph on n vertices with cut dimension at least 3n/2 -2, giving the first lower bound larger than n on the deterministic cut query complexity of computing mincut. We observe that the cut dimension is even a lower bound on the number of linear queries needed by a deterministic algorithm to solve mincut, where a linear query can ask any vector x ∈ ℝ^{binom(n,2)} and receives the answer w^T x. Our results thus show a lower bound of 2n-3 on the number of linear queries needed by a deterministic algorithm to solve minimum cut on n-vertex graphs, and imply that one cannot show a lower bound larger than this via the cut dimension. We further introduce a generalization of the cut dimension which we call the 𝓁₁-approximate cut dimension. The 𝓁₁-approximate cut dimension is also a lower bound on the number of linear queries needed by a deterministic algorithm to compute minimum cut. It is always at least as large as the cut dimension, and we construct an infinite family of graphs on n = 3k+1 vertices with 𝓁₁-approximate cut dimension 2n-2, showing that it can be strictly larger than the cut dimension.
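
As a hedged illustration of the definition (exponential time, toy sizes only): enumerate all cuts of a small weighted graph, keep those of minimum weight, and report the rank of their characteristic vectors in {0,1}^m.

from itertools import combinations
import numpy as np

def cut_dimension(n, weighted_edges):
    # Enumerate each bipartition once (fix vertex 0 on one side), collect the
    # characteristic vectors of the minimum-weight cuts, and return their rank.
    best, cuts = float('inf'), []
    for r in range(1, n):
        for rest in combinations(range(1, n), r - 1):
            S = {0} | set(rest)
            vec = [1 if (u in S) != (v in S) else 0 for u, v, w in weighted_edges]
            wt = sum(w for (u, v, w), x in zip(weighted_edges, vec) if x)
            if wt < best - 1e-12:
                best, cuts = wt, [vec]
            elif abs(wt - best) <= 1e-12:
                cuts.append(vec)
    return np.linalg.matrix_rank(np.array(cuts))

# Unweighted 4-cycle: its six minimum cuts have weight 2; the rank is 4 <= 2n-3.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(cut_dimension(4, edges))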

Cite as

Troy Lee, Tongyang Li, Miklos Santha, and Shengyu Zhang. On the Cut Dimension of a Graph. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 15:1-15:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lee_et_al:LIPIcs.CCC.2021.15,
  author =	{Lee, Troy and Li, Tongyang and Santha, Miklos and Zhang, Shengyu},
  title =	{{On the Cut Dimension of a Graph}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{15:1--15:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.15},
  URN =		{urn:nbn:de:0030-drops-142890},
  doi =		{10.4230/LIPIcs.CCC.2021.15},
  annote =	{Keywords: Query complexity, submodular function minimization, cut dimension}
}
Document
Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers

Authors: Chengyu Lin and Shengyu Zhang

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
The Sensitivity Conjecture and the Log-rank Conjecture are among the most important and challenging problems in concrete complexity. Incidentally, the Sensitivity Conjecture is known to hold for monotone functions, and so is the Log-rank Conjecture for f(x AND y) and f(x XOR y) with monotone functions f, where AND and XOR are applied bit-wise. In this paper, we extend these results to functions f which alternate values for a relatively small number of times on any monotone path from 0^n to 1^n. These results deepen our understanding of the two conjectures and contribute to the recent line of research on functions with small alternating numbers.
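
For small n the alternating number can be checked by brute force: it is the maximum number of value changes of f along any monotone path from 0^n to 1^n obtained by flipping bits one at a time. An illustrative sketch:

from itertools import permutations

def alternating_number(f, n):
    # Maximum number of value changes of f over all monotone paths
    # from the all-0 input to the all-1 input.
    best = 0
    for order in permutations(range(n)):
        x = [0] * n
        alts, prev = 0, f(tuple(x))
        for i in order:
            x[i] = 1
            cur = f(tuple(x))
            if cur != prev:
                alts, prev = alts + 1, cur
        best = max(best, alts)
    return best

# XOR of 3 bits alternates on every step: alternating number 3.
print(alternating_number(lambda x: sum(x) % 2, 3))
# A monotone function (majority) alternates at most once.
print(alternating_number(lambda x: int(sum(x) >= 2), 3))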

Cite as

Chengyu Lin and Shengyu Zhang. Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 51:1-51:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ICALP.2017.51,
  author =	{Lin, Chengyu and Zhang, Shengyu},
  title =	{{Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{51:1--51:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.51},
  URN =		{urn:nbn:de:0030-drops-74045},
  doi =		{10.4230/LIPIcs.ICALP.2017.51},
  annote =	{Keywords: Analysis of Boolean functions, Sensitivity Conjecture, Log-rank Conjecture, Alternating Number}
}
Document
Linear Time Algorithm for Quantum 2SAT

Authors: Itai Arad, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
A canonical result about satisfiability theory is that the 2-SAT problem can be solved in linear time, despite the NP-hardness of the 3-SAT problem. In the quantum 2-SAT problem, we are given a family of 2-qubit projectors Q_{ij} on a system of n qubits, and the task is to decide whether the Hamiltonian H = sum Q_{ij} has a 0-eigenvalue, or whether its smallest eigenvalue is larger than 1/n^c for some c = O(1). The problem is not only a natural extension of the classical 2-SAT problem to the quantum case, but is also equivalent to the problem of finding the ground state of 2-local frustration-free Hamiltonians of spin 1/2, a well-studied model believed to capture certain key properties in modern condensed matter physics. While Bravyi has shown that the quantum 2-SAT problem has a classical polynomial-time algorithm, the running time of his algorithm is O(n^4). In this paper we give a classical algorithm with linear running time in the number of local projectors, therefore achieving the best possible complexity.
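
To make the problem statement concrete (a numerical check for toy sizes, not the paper's linear-time algorithm): build H as a sum of 2-qubit projectors embedded into the n-qubit space and inspect its smallest eigenvalue.

import numpy as np

def embed(P, i, j, n):
    # Lift a 4x4 two-qubit operator P acting on qubits (i, j) to the full
    # 2^n-dimensional space, acting as the identity on the other qubits.
    N = 2 ** n
    mask = ~((1 << i) | (1 << j))
    M = np.zeros((N, N))
    for x in range(N):
        for y in range(N):
            if (x & mask) != (y & mask):
                continue  # basis states must agree outside qubits i and j
            a = (((x >> i) & 1) << 1) | ((x >> j) & 1)
            b = (((y >> i) & 1) << 1) | ((y >> j) & 1)
            M[x, y] = P[a, b]
    return M

# Projector onto the singlet state (|01> - |10>)/sqrt(2).
v = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
Q = np.outer(v, v)

# H = Q_{01} + Q_{12} on 3 qubits is frustration-free: smallest eigenvalue ~ 0,
# i.e. a satisfiable quantum 2-SAT instance.
H = embed(Q, 0, 1, 3) + embed(Q, 1, 2, 3)
print(np.linalg.eigvalsh(H)[0])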

Cite as

Itai Arad, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang. Linear Time Algorithm for Quantum 2SAT. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{arad_et_al:LIPIcs.ICALP.2016.15,
  author =	{Arad, Itai and Santha, Miklos and Sundaram, Aarthi and Zhang, Shengyu},
  title =	{{Linear Time Algorithm for Quantum 2SAT}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.15},
  URN =		{urn:nbn:de:0030-drops-62795},
  doi =		{10.4230/LIPIcs.ICALP.2016.15},
  annote =	{Keywords: Quantum SAT, Davis-Putnam Procedure, Linear Time Algorithm}
}
Document
On the Complexity of Probabilistic Trials for Hidden Satisfiability Problems

Authors: Itai Arad, Adam Bouland, Daniel Grier, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang

Published in: LIPIcs, Volume 58, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)


Abstract
What is the minimum amount of information and time needed to solve 2SAT? When the instance is known, it can be solved in polynomial time, but is this also possible without knowing the instance? Bei, Chen and Zhang (STOC'13) considered a model where the input is accessed by proposing possible assignments to a special oracle. This oracle, on encountering some constraint unsatisfied by the proposal, returns only the constraint index. It turns out that, in this model, even 1SAT cannot be solved in polynomial time unless P=NP. Hence, we consider a model in which the input is accessed by proposing probability distributions over assignments to the variables. The oracle then returns the index of the constraint that is most likely to be violated by this distribution. We show that the information obtained this way is sufficient to solve 1SAT in polynomial time, even when the clauses can be repeated. For 2SAT, as long as there are no repeated clauses, in polynomial time we can even learn an equivalent formula for the hidden instance and hence also solve it. Furthermore, we extend these results to the quantum regime. We show that in this setting 1QSAT can be solved in polynomial time up to constant precision, and 2QSAT can be learnt in polynomial time up to inverse polynomial precision.
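
A minimal sketch of this oracle, restricted for simplicity to product distributions (variable i is set to true independently with probability p[i]); the signed-integer clause encoding is our own convention, not the paper's:

def oracle(clauses, p):
    # Return the index of the clause most likely to be violated when each
    # variable i is true with probability p[i], independently.
    def violation_prob(clause):
        prob = 1.0
        for lit in clause:        # literal +k means x_k, -k means NOT x_k
            i = abs(lit) - 1
            prob *= (1.0 - p[i]) if lit > 0 else p[i]
        return prob
    return max(range(len(clauses)), key=lambda j: violation_prob(clauses[j]))

# Hidden instance (x_1) AND (NOT x_2) AND (x_1 OR x_3), probed with p = (0.3, 0.5, 0.5).
print(oracle([(1,), (-2,), (1, 3)], [0.3, 0.5, 0.5]))  # 0: (x_1) is most likely violated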

Cite as

Itai Arad, Adam Bouland, Daniel Grier, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang. On the Complexity of Probabilistic Trials for Hidden Satisfiability Problems. In 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 58, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{arad_et_al:LIPIcs.MFCS.2016.12,
  author =	{Arad, Itai and Bouland, Adam and Grier, Daniel and Santha, Miklos and Sundaram, Aarthi and Zhang, Shengyu},
  title =	{{On the Complexity of Probabilistic Trials for Hidden Satisfiability Problems}},
  booktitle =	{41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-016-3},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{58},
  editor =	{Faliszewski, Piotr and Muscholl, Anca and Niedermeier, Rolf},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2016.12},
  URN =		{urn:nbn:de:0030-drops-64284},
  doi =		{10.4230/LIPIcs.MFCS.2016.12},
  annote =	{Keywords: computational complexity, satisfiability problems, trial and error, quantum computing, learning theory}
}
Document
On The I/O Complexity of Dynamic Distinct Counting

Authors: Xiaocheng Hu, Yufei Tao, Yi Yang, Shengyu Zhang, and Shuigeng Zhou

Published in: LIPIcs, Volume 31, 18th International Conference on Database Theory (ICDT 2015)


Abstract
In dynamic distinct counting, we want to maintain a multi-set S of integers under insertions to answer efficiently the query: how many distinct elements are there in S? In external memory, the problem admits two standard solutions. The first one maintains S in a hash structure, so that the distinct count can be incrementally updated after each insertion using O(1) expected I/Os. A query is answered for free. The second one stores S in a linked list, and thus supports an insertion in O(1/B) amortized I/Os. A query can be answered in O(N/B log_{M/B} (N/B)) I/Os by sorting, where N=|S|, B is the block size, and M is the memory size. In this paper, we show that the above two naive solutions are already optimal within a polylog factor. Specifically, for any Las Vegas structure using N^{O(1)} blocks, if its expected amortized insertion cost is o(1/log B), then it must incur Omega(N/(B log B)) expected I/Os answering a query in the worst case, under the (realistic) condition that N is polynomial in B. This means that the problem is repugnant to update buffering: the query cost jumps dramatically from 0 to almost linear as soon as the insertion cost drops slightly below Omega(1).
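
The two baseline solutions are easy to sketch in code (in-memory Python stand-ins for the external-memory structures, for illustration only):

class HashCounter:
    # Hash-based baseline: O(1) expected I/Os per insertion, queries for free.
    def __init__(self):
        self.seen, self.count = set(), 0
    def insert(self, x):
        if x not in self.seen:
            self.seen.add(x)
            self.count += 1
    def query(self):
        return self.count

class LogCounter:
    # Append-only baseline: O(1/B) amortized I/Os per (buffered) insertion,
    # but a query sorts the log, costing O(N/B log_{M/B}(N/B)) I/Os.
    def __init__(self):
        self.log = []
    def insert(self, x):
        self.log.append(x)
    def query(self):
        distinct, prev = 0, object()
        for x in sorted(self.log):   # an external-memory sort in the real setting
            if x != prev:
                distinct, prev = distinct + 1, x
        return distinct

h, l = HashCounter(), LogCounter()
for x in [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]:
    h.insert(x)
    l.insert(x)
print(h.query(), l.query())  # 7 7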

Cite as

Xiaocheng Hu, Yufei Tao, Yi Yang, Shengyu Zhang, and Shuigeng Zhou. On The I/O Complexity of Dynamic Distinct Counting. In 18th International Conference on Database Theory (ICDT 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 31, pp. 265-276, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{hu_et_al:LIPIcs.ICDT.2015.265,
  author =	{Hu, Xiaocheng and Tao, Yufei and Yang, Yi and Zhang, Shengyu and Zhou, Shuigeng},
  title =	{{On The I/O Complexity of Dynamic Distinct Counting}},
  booktitle =	{18th International Conference on Database Theory (ICDT 2015)},
  pages =	{265--276},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-79-8},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{31},
  editor =	{Arenas, Marcelo and Ugarte, Mart{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2015.265},
  URN =		{urn:nbn:de:0030-drops-49895},
  doi =		{10.4230/LIPIcs.ICDT.2015.265},
  annote =	{Keywords: distinct counting, lower bound, external memory}
}

Zhang, Min

Document
On Reachability Analysis of Pushdown Systems with Transductions: Application to Boolean Programs with Call-by-Reference

Authors: Fu Song, Weikai Miao, Geguang Pu, and Min Zhang

Published in: LIPIcs, Volume 42, 26th International Conference on Concurrency Theory (CONCUR 2015)


Abstract
Pushdown systems with transductions (TrPDSs) are an extension of pushdown systems (PDSs) by associating each transition rule with a transduction, which allows one to inspect and modify the stack content at each step of a transition rule. It was shown by Uezato and Minamide that TrPDSs can model PDSs with checkpoint and discrete-timed PDSs. Moreover, TrPDSs can be simulated by PDSs, and the predecessor configurations pre^*(C) of a regular set C of configurations can be computed by a saturation procedure when the closure of the transductions in the TrPDS is finite. In this work, we comprehensively investigate the reachability problem of finite TrPDSs. We propose a novel saturation procedure to compute pre^*(C) for finite TrPDSs. Also, we introduce a saturation procedure to compute the successor configurations post^*(C) of a regular set C of configurations for finite TrPDSs. From these two saturation procedures, we present two efficient implementation algorithms to compute pre^*(C) and post^*(C). Finally, we show how the presence of transductions enables the modeling of Boolean programs with call-by-reference parameter passing. The resulting TrPDS has a finite closure of transductions, which yields a model-checking approach for Boolean programs with call-by-reference parameter passing against safety properties.

Cite as

Fu Song, Weikai Miao, Geguang Pu, and Min Zhang. On Reachability Analysis of Pushdown Systems with Transductions: Application to Boolean Programs with Call-by-Reference. In 26th International Conference on Concurrency Theory (CONCUR 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 42, pp. 383-397, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{song_et_al:LIPIcs.CONCUR.2015.383,
  author =	{Song, Fu and Miao, Weikai and Pu, Geguang and Zhang, Min},
  title =	{{On Reachability Analysis of Pushdown Systems with Transductions: Application to Boolean Programs with Call-by-Reference}},
  booktitle =	{26th International Conference on Concurrency Theory (CONCUR 2015)},
  pages =	{383--397},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-91-0},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{42},
  editor =	{Aceto, Luca and de Frutos Escrig, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2015.383},
  URN =		{urn:nbn:de:0030-drops-53624},
  doi =		{10.4230/LIPIcs.CONCUR.2015.383},
  annote =	{Keywords: Verification, Reachability problem, Pushdown system with transductions, Boolean programs with call-by-reference}
}

Zhang, Guochuan

Document
Track A: Algorithms, Complexity and Games
Approximation Algorithms for Interdiction Problem with Packing Constraints

Authors: Lin Chen, Xiaoyu Wu, and Guochuan Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study a bilevel optimization problem which is a zero-sum Stackelberg game. In this problem, there are two players, a leader and a follower, who pick items from a common set. Both the leader and the follower have their own (multi-dimensional) budgets, respectively. Each item is associated with a profit, which is the same for the leader and the follower, and consumes the leader’s (follower’s) budget if it is selected by the leader (follower). The leader and the follower select items sequentially: first, the leader selects items within the leader’s budget. Then the follower selects items from the remaining items within the follower’s budget. The goal of the leader is to minimize the maximum profit that the follower can obtain. Let s_A and s_B be the dimensions of the leader’s and follower’s budgets, respectively. A special case of our problem is the bilevel knapsack problem studied by Caprara et al. [SIAM Journal on Optimization, 2014], where s_A = s_B = 1. We consider the general problem and obtain an (s_B+ε)-approximation algorithm when s_A and s_B are both constant. In particular, if s_B = 1, our algorithm implies a PTAS for the bilevel knapsack problem, which is the first 𝒪(1)-approximation algorithm. We complement our result by showing that there does not exist any (4/3-ε)-approximation algorithm even if s_A = 1 and s_B = 2. We also consider a variant of our problem with resource augmentation when s_A and s_B are both part of the input. We obtain an 𝒪(1)-approximation algorithm with 𝒪(1)-resource augmentation, that is, we give an algorithm that returns a solution which exceeds the given leader’s budget by 𝒪(1) times, and the objective value achieved by the solution is 𝒪(1) times the optimal objective value that respects the leader’s budget.
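
A brute-force sketch of the game for the one-dimensional case s_A = s_B = 1 (toy instances only; the paper's algorithms are far more efficient):

from itertools import chain, combinations

def subsets(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def interdiction_value(profits, lead_w, foll_w, lead_B, foll_B):
    # The leader picks items within budget lead_B to minimize the best profit
    # the follower can collect from the remaining items within budget foll_B.
    n, best = len(profits), float('inf')
    for L in subsets(range(n)):
        if sum(lead_w[i] for i in L) > lead_B:
            continue
        rest = [i for i in range(n) if i not in L]
        follower = max(sum(profits[i] for i in F)
                       for F in subsets(rest)
                       if sum(foll_w[i] for i in F) <= foll_B)
        best = min(best, follower)
    return best

# The leader can only afford to block item 0, cutting the follower from 11 to 5.
print(interdiction_value(profits=[6, 5, 4], lead_w=[2, 3, 3],
                         foll_w=[2, 2, 3], lead_B=2, foll_B=4))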

Cite as

Lin Chen, Xiaoyu Wu, and Guochuan Zhang. Approximation Algorithms for Interdiction Problem with Packing Constraints. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2022.39,
  author =	{Chen, Lin and Wu, Xiaoyu and Zhang, Guochuan},
  title =	{{Approximation Algorithms for Interdiction Problem with Packing Constraints}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.39},
  URN =		{urn:nbn:de:0030-drops-163806},
  doi =		{10.4230/LIPIcs.ICALP.2022.39},
  annote =	{Keywords: Bilevel Integer Programming, Interdiction Constraints, Knapsack}
}
Document
Complete Volume
LIPIcs, Volume 149, ISAAC'19, Complete Volume

Authors: Pinyan Lu and Guochuan Zhang

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
LIPIcs, Volume 149, ISAAC'19, Complete Volume

Cite as

30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@Proceedings{lu_et_al:LIPIcs.ISAAC.2019,
  title =	{{LIPIcs, Volume 149, ISAAC'19, Complete Volume}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019},
  URN =		{urn:nbn:de:0030-drops-116417},
  doi =		{10.4230/LIPIcs.ISAAC.2019},
  annote =	{Keywords: Theory of computation; Models of computation; Computational complexity and cryptography; Randomness, geometry and discrete structures; Theory and algorithms for application domains; Design and analysis of algorithms}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Symposium Organization

Authors: Pinyan Lu and Guochuan Zhang

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
Front Matter, Table of Contents, Preface, Symposium Organization

Cite as

30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.ISAAC.2019.0,
  author =	{Lu, Pinyan and Zhang, Guochuan},
  title =	{{Front Matter, Table of Contents, Preface, Symposium Organization}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.0},
  URN =		{urn:nbn:de:0030-drops-114967},
  doi =		{10.4230/LIPIcs.ISAAC.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Symposium Organization}
}
Document
Parameterized and Approximation Results for Scheduling with a Low Rank Processing Time Matrix

Authors: Lin Chen, Dániel Marx, Deshi Ye, and Guochuan Zhang

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
We study approximation and parameterized algorithms for R||C_max, focusing on the problem when the rank of the matrix formed by job processing times is small. Bhaskara et al. initiated the study of approximation algorithms with respect to the rank, showing that R||C_max admits a QPTAS (quasi-polynomial time approximation scheme) when the rank is 2, and becomes APX-hard when the rank is 4. We continue this line of research. We prove that R||C_max is APX-hard even if the rank is 3, resolving an open problem. We then show that R||C_max is FPT parameterized by the rank and the largest job processing time p_max. This generalizes the parameterized results on P||C_max and R||C_max with few different types of machines. We also provide nearly tight lower bounds under the Exponential Time Hypothesis which suggest that the running time of the FPT algorithm is unlikely to be improved significantly.
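
To illustrate the low-rank structure (not the paper's algorithms): a rank-2 matrix of processing times factors as p = UV^T, and R||C_max asks for a job-to-machine assignment minimizing the maximum machine load. A brute-force check on a toy instance:

import numpy as np
from itertools import product

def min_makespan(p):
    # Brute-force R||C_max: p[i, j] is the processing time of job j on machine i.
    m, n = p.shape
    best = float('inf')
    for assign in product(range(m), repeat=n):
        loads = np.zeros(m)
        for j, i in enumerate(assign):
            loads[i] += p[i, j]
        best = min(best, loads.max())
    return best

# Rank-2 instance: machine factors U (3 x 2) and job factors V (4 x 2).
U = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 3.0]])
V = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 2.0], [2.0, 1.0]])
p = U @ V.T  # every processing time is the inner product U[i] . V[j]
print(min_makespan(p))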

Cite as

Lin Chen, Dániel Marx, Deshi Ye, and Guochuan Zhang. Parameterized and Approximation Results for Scheduling with a Low Rank Processing Time Matrix. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.STACS.2017.22,
  author =	{Chen, Lin and Marx, D\'{a}niel and Ye, Deshi and Zhang, Guochuan},
  title =	{{Parameterized and Approximation Results for Scheduling with a Low Rank Processing Time Matrix}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.22},
  URN =		{urn:nbn:de:0030-drops-70110},
  doi =		{10.4230/LIPIcs.STACS.2017.22},
  annote =	{Keywords: APX-hardness, Parameterized algorithm, Scheduling, Exponential Time Hypothesis}
}
Document
Approximation Algorithms for Parallel Machine Scheduling with Speed-up Resources

Authors: Lin Chen, Deshi Ye, and Guochuan Zhang

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
We consider the problem of scheduling with renewable speed-up resources. Given m identical machines, n jobs and c different discrete resources, the task is to schedule each job non-preemptively onto one of the machines so as to minimize the makespan. In our problem, a job has its original processing time, which can be reduced by utilizing one of the resources. As the resources are different, the amount of time reduced for each job depends on the resource it uses. Once a resource is being used by one job, it cannot be used simultaneously by any other job until that job is finished, hence the scheduler should take into account the job-to-machine assignment together with the resource-to-job assignment. We observe that the classical unrelated machine scheduling problem is actually a special case of our problem when m=c, i.e., the number of resources equals the number of machines. Extending the techniques for unrelated machine scheduling, we give a 2-approximation algorithm when both m and c are part of the input. We then consider two special cases of the problem, with m or c being a constant, and derive PTASes (polynomial time approximation schemes) respectively. We also establish the relationship between the two parameters m and c, through which we are able to transform the PTAS for the case when m is constant to the case when c is a constant. The relationship between the two parameters reveals the structure within the problem, and may be of independent interest.

Cite as

Lin Chen, Deshi Ye, and Guochuan Zhang. Approximation Algorithms for Parallel Machine Scheduling with Speed-up Resources. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 5:1-5:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2016.5,
  author =	{Chen, Lin and Ye, Deshi and Zhang, Guochuan},
  title =	{{Approximation Algorithms for Parallel Machine Scheduling with Speed-up Resources}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{5:1--5:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.5},
  URN =		{urn:nbn:de:0030-drops-66283},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.5},
  annote =	{Keywords: approximation algorithms, scheduling, linear programming}
}
Document
Packing Groups of Items into Multiple Knapsacks

Authors: Lin Chen and Guochuan Zhang

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
We consider a natural generalization of the classical multiple knapsack problem in which instead of packing single items we are packing groups of items. In this problem, we have multiple knapsacks and a set of items partitioned into groups. Each item has an individual weight, while the profit is associated with groups rather than items. The profit of a group can be attained if and only if every item of this group is packed. Such a general model finds applications in various practical problems, e.g., delivering bundles of goods. The tractability of this problem relies heavily on how large a group can be. Deciding if a group of items of total weight 2 can be packed into two knapsacks of unit capacity is already NP-hard, which rules out a constant-factor approximation algorithm for this problem in general. We then focus on the parameterized version where the total weight of items in each group is bounded by a factor delta of the total capacity of all knapsacks. Both approximation and inapproximability results with respect to delta are derived. We also show that, depending on whether the number of knapsacks is a constant or part of the input, the approximation ratio for the problem, as a function of delta, changes substantially, which marks a clear difference from the classical multiple knapsack problem.

Cite as

Lin Chen and Guochuan Zhang. Packing Groups of Items into Multiple Knapsacks. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 28:1-28:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.STACS.2016.28,
  author =	{Chen, Lin and Zhang, Guochuan},
  title =	{{Packing Groups of Items into Multiple Knapsacks}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{28:1--28:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.28},
  URN =		{urn:nbn:de:0030-drops-57299},
  doi =		{10.4230/LIPIcs.STACS.2016.28},
  annote =	{Keywords: approximation algorithms, lower bound, multiple knapsack, bin packing}
}

Zhang, Xue

Document
Complexity Classification of Two-Qubit Commuting Hamiltonians

Authors: Adam Bouland, Laura Mancinska, and Xue Zhang

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.

Cite as

Adam Bouland, Laura Mancinska, and Xue Zhang. Complexity Classification of Two-Qubit Commuting Hamiltonians. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 28:1-28:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bouland_et_al:LIPIcs.CCC.2016.28,
  author =	{Bouland, Adam and Mancinska, Laura and Zhang, Xue},
  title =	{{Complexity Classification of Two-Qubit Commuting Hamiltonians}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{28:1--28:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.28},
  URN =		{urn:nbn:de:0030-drops-58469},
  doi =		{10.4230/LIPIcs.CCC.2016.28},
  annote =	{Keywords: Quantum Computing, Sampling Problems, Commuting Hamiltonians, IQP, Gate Classification Theorems}
}

Zhang, Peng

Document
Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences

Authors: Ming Ding and Peng Zhang

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We present efficient algorithms for solving systems of linear equations in 1-Laplacians of well-shaped simplicial complexes. 1-Laplacians, or higher-dimensional Laplacians, generalize graph Laplacians to higher-dimensional simplicial complexes and play a key role in computational topology and topological data analysis. Previously, nearly-linear time solvers were developed for simplicial complexes with known collapsing sequences and bounded Betti numbers, such as those triangulating a three-ball in ℝ³ (Cohen, Fasy, Miller, Nayyeri, Peng, and Walkington [SODA'2014], Black, Maxwell, Nayyeri, and Winkelman [SODA'2022], Black and Nayyeri [ICALP'2022]). Furthermore, Nested Dissection provides quadratic time solvers for more general systems with nonzero structures representing well-shaped simplicial complexes embedded in ℝ³. We generalize the specialized solvers for 1-Laplacians to simplicial complexes with additional geometric structures but without collapsing sequences and bounded Betti numbers, and we improve the runtime of Nested Dissection. We focus on simplicial complexes that meet two conditions: (1) each individual simplex has a bounded aspect ratio, and (2) they can be divided into "disjoint" and balanced regions with well-shaped interiors and boundaries. Our solvers draw inspiration from the Incomplete Nested Dissection for stiffness matrices of well-shaped trusses (Kyng, Peng, Schwieterman, and Zhang [STOC'2018]).
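
For concreteness, the 1-Laplacian of a 2-complex is L1 = d1^T d1 + d2 d2^T, where d1 and d2 are the edge and triangle boundary maps; the solvers discussed here target linear systems in such matrices. A toy construction (the boundary of a tetrahedron), illustrative only:

import numpy as np

vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# d1: edge boundary map, (u, v) |-> v - u.
d1 = np.zeros((len(vertices), len(edges)))
for k, (u, v) in enumerate(edges):
    d1[u, k], d1[v, k] = -1.0, 1.0

# d2: triangle boundary map, (a, b, c) |-> (b, c) - (a, c) + (a, b).
idx = {e: k for k, e in enumerate(edges)}
d2 = np.zeros((len(edges), len(triangles)))
for k, (a, b, c) in enumerate(triangles):
    d2[idx[(b, c)], k] += 1.0
    d2[idx[(a, c)], k] -= 1.0
    d2[idx[(a, b)], k] += 1.0

# The 1-Laplacian acts on edge space; dim ker L1 equals the first Betti number.
L1 = d1.T @ d1 + d2 @ d2.T
print(np.linalg.matrix_rank(L1))  # 6: the tetrahedron boundary (a 2-sphere) has b1 = 0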

Cite as

Ming Ding and Peng Zhang. Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 41:1-41:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ESA.2023.41,
  author =	{Ding, Ming and Zhang, Peng},
  title =	{{Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{41:1--41:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.41},
  URN =		{urn:nbn:de:0030-drops-186947},
  doi =		{10.4230/LIPIcs.ESA.2023.41},
  annote =	{Keywords: 1-Laplacian Solvers, Simplicial Complexes, Incomplete Nested Dissection}
}
Document
APPROX
Hardness Results for Weaver’s Discrepancy Problem

Authors: Daniel A. Spielman and Peng Zhang

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
Marcus, Spielman and Srivastava (Annals of Mathematics 2014) solved the Kadison-Singer Problem by proving a strong form of Weaver’s conjecture: they showed that for all α > 0 and all lists of vectors of norm at most √α whose outer products sum to the identity, there exists a signed sum of those outer products with operator norm at most √{8α} + 2α. We prove that it is NP-hard to distinguish such a list of vectors for which there is a signed sum that equals the zero matrix from those in which every signed sum has operator norm at least η √α, for some absolute constant η > 0. Thus, it is NP-hard to construct a signing that is a constant factor better than that guaranteed to exist. For α = 1/4, we prove that it is NP-hard to distinguish whether there is a signed sum that equals the zero matrix from the case in which every signed sum has operator norm at least 1/4.
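
A brute-force illustration of the quantity in question: minimize the operator norm of the signed sum of outer products over all 2^m signings (feasible only for tiny m; the hardness results concern exactly this gap):

import numpy as np
from itertools import product

def best_signing_norm(vectors):
    # Minimize || sum_i eps_i v_i v_i^T || over all signings eps in {-1, +1}^m.
    outer = [np.outer(v, v) for v in vectors]
    best = float('inf')
    for eps in product((-1.0, 1.0), repeat=len(outer)):
        M = sum(e * P for e, P in zip(eps, outer))
        best = min(best, np.linalg.norm(M, 2))  # spectral (operator) norm
    return best

# Two copies of each scaled standard basis vector of R^2: the outer products
# sum to the identity, every squared norm is alpha = 1/2, and pairing opposite
# signs gives the zero matrix, i.e. the "easy" side of the dichotomy.
alpha = 0.5
vectors = [np.sqrt(alpha) * e for e in np.eye(2) for _ in range(2)]
print(best_signing_norm(vectors))  # 0.0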

Cite as

Daniel A. Spielman and Peng Zhang. Hardness Results for Weaver’s Discrepancy Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{spielman_et_al:LIPIcs.APPROX/RANDOM.2022.40,
  author =	{Spielman, Daniel A. and Zhang, Peng},
  title =	{{Hardness Results for Weaver’s Discrepancy Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.40},
  URN =		{urn:nbn:de:0030-drops-171628},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.40},
  annote =	{Keywords: Discrepancy Problem, Kadison-Singer Problem, Hardness of Approximation}
}
Document
Track A: Algorithms, Complexity and Games
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets

Authors: Ming Ding, Rasmus Kyng, Maximilian Probst Gutenberg, and Peng Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study linear equations in combinatorial Laplacians of k-dimensional simplicial complexes (k-complexes), a natural generalization of graph Laplacians. Combinatorial Laplacians play a crucial role in homology and are a central tool in topology. Beyond this, they have various applications in data analysis and physical modeling problems. It is known that nearly-linear time solvers exist for graph Laplacians. However, nearly-linear time solvers for combinatorial Laplacians are only known for restricted classes of complexes. This paper shows that linear equations in combinatorial Laplacians of 2-complexes are as hard to solve as general linear equations. More precisely, for any constant c ≥ 1, if we can solve linear equations in combinatorial Laplacians of 2-complexes up to high accuracy in time Õ((# of nonzero coefficients)^c), then we can solve general linear equations with polynomially bounded integer coefficients and condition numbers up to high accuracy in time Õ((# of nonzero coefficients)^c). We prove this by a nearly-linear time reduction from general linear equations to combinatorial Laplacians of 2-complexes. Our reduction preserves the sparsity of the problem instances up to poly-logarithmic factors.

Cite as

Ming Ding, Rasmus Kyng, Maximilian Probst Gutenberg, and Peng Zhang. Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 53:1-53:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ICALP.2022.53,
  author =	{Ding, Ming and Kyng, Rasmus and Gutenberg, Maximilian Probst and Zhang, Peng},
  title =	{{Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{53:1--53:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.53},
  URN =		{urn:nbn:de:0030-drops-163945},
  doi =		{10.4230/LIPIcs.ICALP.2022.53},
  annote =	{Keywords: Simplicial Complexes, Combinatorial Laplacians, Linear Equations, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions

Authors: Ming Ding, Rasmus Kyng, and Peng Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We give a nearly-linear time reduction that encodes any linear program as a 2-commodity flow problem with only a small blow-up in size. Under mild assumptions similar to those employed by modern fast solvers for linear programs, our reduction causes only a polylogarithmic multiplicative increase in the size of the program, and runs in nearly-linear time. Our reduction applies to high-accuracy approximation algorithms and exact algorithms. Given an approximate solution to the 2-commodity flow problem, we can extract a solution to the linear program in linear time with only a polynomial factor increase in the error. This implies that any algorithm that solves the 2-commodity flow problem can solve linear programs in essentially the same time. Given a directed graph with edge capacities and two source-sink pairs, the goal of the 2-commodity flow problem is to maximize the sum of the flows routed between the two source-sink pairs subject to edge capacities and flow conservation. A 2-commodity flow problem can be formulated as a linear program, which can be solved to high accuracy in almost the current matrix multiplication time (Cohen-Lee-Song JACM'21). Our reduction shows that linear programs can be approximately solved, to high accuracy, using 2-commodity flow as well. Our proof follows the outline of Itai’s polynomial-time reduction of a linear program to a 2-commodity flow problem (JACM’78). Itai’s reduction shows that exactly solving 2-commodity flow and exactly solving linear programming are polynomial-time equivalent. We improve Itai’s reduction to nearly preserve the problem representation size in each step. In addition, we establish an error bound for approximately solving each intermediate problem in the reduction, and show that the accumulated error is polynomially bounded. We remark that our reduction does not run in strongly polynomial time and that it is open whether 2-commodity flow and linear programming are equivalent in strongly polynomial time.
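
The direction quoted above, writing a 2-commodity flow as a linear program, is easy to sketch with scipy on a toy network of our own (the paper's contribution is the much harder reverse reduction):

import numpy as np
from scipy.optimize import linprog

# Nodes: 0 = s1, 1 = s2, 4 = t1, 5 = t2; edge (2, 3) is shared by both commodities.
edges = [(0, 2), (1, 2), (2, 3), (3, 4), (3, 5)]
cap = [1.0, 1.0, 1.5, 1.0, 1.0]
E = len(edges)
pairs = {0: (0, 4), 1: (1, 5)}  # commodity -> (source, sink)

# Flow conservation for each commodity at every non-terminal node.
A_eq, b_eq = [], []
for k, (s, t) in pairs.items():
    for v in range(6):
        if v in (s, t):
            continue
        row = np.zeros(2 * E)
        for i, (a, b) in enumerate(edges):
            if b == v:
                row[k * E + i] += 1.0  # inflow
            if a == v:
                row[k * E + i] -= 1.0  # outflow
        A_eq.append(row)
        b_eq.append(0.0)

# Joint capacities f1(e) + f2(e) <= cap(e); maximize flow into the two sinks.
A_ub = np.hstack([np.eye(E), np.eye(E)])
c = np.zeros(2 * E)
c[edges.index((3, 4))] = -1.0      # commodity 1 arriving at t1
c[E + edges.index((3, 5))] = -1.0  # commodity 2 arriving at t2
res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (2 * E))
print(-res.fun)  # 1.5: the shared edge (2, 3) bounds the total flow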

Cite as

Ming Ding, Rasmus Kyng, and Peng Zhang. Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 54:1-54:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ICALP.2022.54,
  author =	{Ding, Ming and Kyng, Rasmus and Zhang, Peng},
  title =	{{Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{54:1--54:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.54},
  URN =		{urn:nbn:de:0030-drops-163950},
  doi =		{10.4230/LIPIcs.ICALP.2022.54},
  annote =	{Keywords: Two-Commodity Flow Problems, Linear Programming, Fine-Grained Complexity}
}
Document
A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem

Authors: Yao Xu, Yong Chen, Guohui Lin, Tian Liu, Taibo Luo, and Peng Zhang

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
The maximum duo-preservation string mapping (Max-Duo) problem is the complement of the well-studied minimum common string partition (MCSP) problem, both of which have applications in many fields including text compression and bioinformatics. k-Max-Duo is the restricted version of Max-Duo, where every letter of the alphabet occurs at most k times in each of the strings, and it readily reduces to the well-known maximum independent set (MIS) problem on a graph of maximum degree \Delta \le 6(k-1). In particular, 2-Max-Duo can then be approximated arbitrarily close to 1.8 using the state-of-the-art approximation algorithm for the MIS problem. 2-Max-Duo was proved APX-hard and very recently a (1.6 + \epsilon)-approximation was claimed, for any \epsilon > 0. In this paper, we present a vertex-degree reduction technique, based on which we show that 2-Max-Duo can be approximated arbitrarily close to 1.4.

Cite as

Yao Xu, Yong Chen, Guohui Lin, Tian Liu, Taibo Luo, and Peng Zhang. A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 66:1-66:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{xu_et_al:LIPIcs.ISAAC.2017.66,
  author =	{Xu, Yao and Chen, Yong and Lin, Guohui and Liu, Tian and Luo, Taibo and Zhang, Peng},
  title =	{{A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{66:1--66:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.66},
  URN =		{urn:nbn:de:0030-drops-82120},
  doi =		{10.4230/LIPIcs.ISAAC.2017.66},
  annote =	{Keywords: Approximation algorithm, duo-preservation string mapping, string partition, independent set}
}
Document
Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time

Authors: Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
We study the problem of approximately solving positive linear programs (LPs). This class of LPs models a wide range of fundamental problems in combinatorial optimization and operations research, such as many resource allocation problems, solving non-negative linear systems, computed tomography, single/multi-commodity flows on graphs, etc. For the special cases of pure packing or pure covering LPs, a recent result by Allen-Zhu and Orecchia [Allen/Zhu/Orecchia, SODA'15] gives an O˜(1/(epsilon^3))-time parallel algorithm, which breaks the longstanding O˜(1/(epsilon^4)) running time bound of the seminal work of Luby and Nisan [Luby/Nisan, STOC'93]. We present a new parallel algorithm with running time O˜(1/(epsilon^3)) for the more general mixed packing and covering LPs, which improves upon the O˜(1/(epsilon^4))-time algorithm of Young [Young, FOCS'01; Young, arXiv 2014]. Our work leverages ideas from both the optimization-oriented approach [Allen/Zhu/Orecchia, SODA'15; Wang/Mahoney/Mohan/Rao, arXiv 2015] and the more combinatorial approach with phases [Young, FOCS'01; Young, arXiv 2014]. In addition, our algorithm, when directly applied to pure packing or pure covering LPs, gives an improved running time of O˜(1/(epsilon^2)).

Cite as

Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang. Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mahoney_et_al:LIPIcs.ICALP.2016.52,
  author =	{Mahoney, Michael W. and Rao, Satish and Wang, Di and Zhang, Peng},
  title =	{{Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^\{-3\}) Time}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.52},
  URN =		{urn:nbn:de:0030-drops-63335},
  doi =		{10.4230/LIPIcs.ICALP.2016.52},
  annote =	{Keywords: Mixed packing and covering, Linear program, Approximation algorithm, Parallel algorithm}
}

Zhang, Yifeng

Document
Scheduling Problems over Network of Machines

Authors: Zachary Friggstad, Arnoosh Golestanian, Kamyar Khodamoradi, Christopher Martin, Mirmahdi Rahgoshay, Mohsen Rezapour, Mohammad R. Salavatipour, and Yifeng Zhang

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
We consider scheduling problems in which jobs need to be processed through a (shared) network of machines. The network is given in the form of a graph the edges of which represent the machines. We are also given a set of jobs, each specified by its processing time and a path in the graph. Every job needs to be processed in the order of edges specified by its path. We assume that jobs can wait between machines and preemption is not allowed; that is, once a job is started being processed on a machine, it must be completed without interruption. Every machine can only process one job at a time. The makespan of a schedule is the earliest time by which all the jobs have finished processing. The flow time (a.k.a. the completion time) of a job in a schedule is the difference in time between when it finishes processing on its last machine and when it begins processing on its first machine. The total flow time (or the sum of completion times) is the sum of flow times (or completion times) of all jobs. Our focus is on finding schedules with the minimum sum of completion times or minimum makespan. In this paper, we develop several algorithms (both approximate and exact) for the problem both on general graphs and when the underlying graph of machines is a tree. Even in the very special case when the underlying network is a simple star, the problem is very interesting as it models a biprocessor scheduling with applications to data migration.

Cite as

Zachary Friggstad, Arnoosh Golestanian, Kamyar Khodamoradi, Christopher Martin, Mirmahdi Rahgoshay, Mohsen Rezapour, Mohammad R. Salavatipour, and Yifeng Zhang. Scheduling Problems over Network of Machines. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{friggstad_et_al:LIPIcs.APPROX-RANDOM.2017.5,
  author =	{Friggstad, Zachary and Golestanian, Arnoosh and Khodamoradi, Kamyar and Martin, Christopher and Rahgoshay, Mirmahdi and Rezapour, Mohsen and Salavatipour, Mohammad R. and Zhang, Yifeng},
  title =	{{Scheduling Problems over Network of Machines}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.5},
  URN =		{urn:nbn:de:0030-drops-75547},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.5},
  annote =	{Keywords: approximation algorithms, job-shop scheduling, min-sum edge coloring, minimum latency}
}
Document
Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median

Authors: Zachary Friggstad and Yifeng Zhang

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
Budgeted Red-Blue Median is a generalization of the classic k-Median problem in that there are two sets of facilities, say R and B, that can be used to serve clients located in some metric space. The goal is to open kr facilities in R and kb facilities in B for some given bounds kr, kb and connect each client to their nearest open facility in a way that minimizes the total connection cost. We extend work by Hajiaghayi, Khandekar, and Kortsarz [2012] and show that a multiple-swap local search heuristic can be used to obtain a (5 + epsilon)-approximation for Budgeted Red-Blue Median for any constant epsilon > 0. This is an improvement over their single-swap analysis and beats the previous best approximation guarantee of 8 by Swamy [2014]. We also present a matching lower bound showing that for every p >= 1, there are instances of Budgeted Red-Blue Median with local optimum solutions for the p-swap heuristic whose cost is 5 + Omega(1/p) times the optimum solution cost. Thus, our analysis is tight up to lower order terms. In particular, for any epsilon > 0 we show the single-swap heuristic admits local optima whose cost can be as bad as 7 - epsilon times the optimum solution cost.
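
A minimal Python sketch of the swap step for the single-swap case (p = 1), on a hypothetical toy instance; the multiple-swap heuristic and the analysis in the paper are substantially more involved:

def cost(clients, open_fac, d):
    # total connection cost: each client goes to its nearest open facility
    return sum(min(d[c][f] for f in open_fac) for c in clients)

def single_swap_local_search(clients, R, B, kr, kb, d):
    sol = set(R[:kr]) | set(B[:kb])           # arbitrary feasible start
    improved = True
    while improved:
        improved = False
        for out in list(sol):
            side = R if out in R else B       # swap within one color class
            for inc in side:
                if inc in sol:
                    continue
                cand = (sol - {out}) | {inc}  # budgets kr, kb are preserved
                if cost(clients, cand, d) < cost(clients, sol, d):
                    sol, improved = cand, True
    return sol

d = {"c1": {"r1": 1, "r2": 3, "b1": 2}, "c2": {"r1": 4, "r2": 1, "b1": 2}}
print(sorted(single_swap_local_search(["c1", "c2"], ["r1", "r2"], ["b1"], 1, 1, d)))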

Cite as

Zachary Friggstad and Yifeng Zhang. Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 75:1-75:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{friggstad_et_al:LIPIcs.ICALP.2016.75,
  author =	{Friggstad, Zachary and Zhang, Yifeng},
  title =	{{Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{75:1--75:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.75},
  URN =		{urn:nbn:de:0030-drops-62094},
  doi =		{10.4230/LIPIcs.ICALP.2016.75},
  annote =	{Keywords: Approximation Algorithms, Local search, Red-Blue Median}
}

Zhang, Jinshan

Document
House Markets with Matroid and Knapsack Constraints

Authors: Piotr Krysta and Jinshan Zhang

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
The classical online bipartite matching problem and its generalizations are central algorithmic optimization problems. A second related line of research lies in the area of algorithmic mechanism design, referring to the broad class of house allocation or assignment problems. We introduce a single framework that unifies and generalizes these two streams of models. Our generalizations allow for arbitrary matroid constraints or knapsack constraints at every object in the allocation problem. We design and analyze approximation algorithms and truthful mechanisms for this framework. Our algorithms have best possible approximation guarantees for most of the special instantiations of this framework, and are strong generalizations of previously known results.

Cite as

Piotr Krysta and Jinshan Zhang. House Markets with Matroid and Knapsack Constraints. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 141:1-141:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{krysta_et_al:LIPIcs.ICALP.2016.141,
  author =	{Krysta, Piotr and Zhang, Jinshan},
  title =	{{House Markets with Matroid and Knapsack Constraints}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{141:1--141:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.141},
  URN =		{urn:nbn:de:0030-drops-62853},
  doi =		{10.4230/LIPIcs.ICALP.2016.141},
  annote =	{Keywords: Algorithmic mechanism design, Approximation algorithms, Matching under preferences, Matroid and knapsack constraints}
}

Zhang, Lu

Document
Learning to Accelerate Symbolic Execution via Code Transformation

Authors: Junjie Chen, Wenxiang Hu, Lingming Zhang, Dan Hao, Sarfraz Khurshid, and Lu Zhang

Published in: LIPIcs, Volume 109, 32nd European Conference on Object-Oriented Programming (ECOOP 2018)


Abstract
Symbolic execution is an effective but expensive technique for automated test generation. Over the years, a large number of refined symbolic execution techniques have been proposed to improve its efficiency. However, the symbolic execution efficiency problem remains, and largely limits the application of symbolic execution in practice. Orthogonal to refined symbolic execution, in this paper we propose to accelerate symbolic execution through semantic-preserving code transformation on the target programs. As an initial step in this direction, we adopt a particular class of code transformations, compiler optimizations, which were originally proposed to accelerate concrete program execution by transforming the source program into a semantics-preserving target program with increased efficiency (e.g., faster or smaller). However, compiler optimizations are mostly designed to accelerate concrete execution rather than symbolic execution. Recent work also reported that no unified compiler-optimization settings exist that can accelerate symbolic execution for arbitrary programs. Therefore, in this work we propose a machine-learning based approach to tuning compiler optimizations to accelerate symbolic execution, whose results may also aid further design of specific code transformations for symbolic execution. In particular, the proposed approach, LEO, separates source-code functions and libraries through our program-splitter, and predicts each individual compiler optimization (i.e., whether a type of code transformation is chosen) separately through analyzing the performance of existing symbolic execution. Finally, LEO applies symbolic execution on the code transformed by compiler optimization (through our local-optimizer). We conduct an empirical study on GNU Coreutils programs using the KLEE symbolic execution engine. The results show that LEO significantly accelerates symbolic execution, outperforming the default KLEE configurations (i.e., turning on/off all compiler optimizations) in various settings; e.g., with the default training/testing time, LEO achieves the highest line coverage in 50/68 programs, and its average improvement rate on all programs is 46.48%/88.92% in terms of line coverage compared with turning on/off all compiler optimizations.
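
The per-optimization prediction step lends itself to a simple sketch. The following hypothetical Python fragment (using scikit-learn as one possible learner; LEO's actual feature extraction via the program-splitter and its local-optimizer are not reproduced here) trains one binary classifier per compiler optimization flag:

from sklearn.tree import DecisionTreeClassifier

def train_flag_predictors(features, flag_labels):
    # features: one numeric feature vector per training program
    # flag_labels: {flag: [0/1 per program]}, 1 meaning that enabling the
    # flag sped up symbolic execution on that program (labels hypothetical)
    return {flag: DecisionTreeClassifier().fit(features, ys)
            for flag, ys in flag_labels.items()}

def predict_flags(models, feature_vector):
    # flags predicted to help on a new, unseen program
    return [flag for flag, m in models.items()
            if m.predict([feature_vector])[0] == 1]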

Cite as

Junjie Chen, Wenxiang Hu, Lingming Zhang, Dan Hao, Sarfraz Khurshid, and Lu Zhang. Learning to Accelerate Symbolic Execution via Code Transformation. In 32nd European Conference on Object-Oriented Programming (ECOOP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 109, pp. 6:1-6:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ECOOP.2018.6,
  author =	{Chen, Junjie and Hu, Wenxiang and Zhang, Lingming and Hao, Dan and Khurshid, Sarfraz and Zhang, Lu},
  title =	{{Learning to Accelerate Symbolic Execution via Code Transformation}},
  booktitle =	{32nd European Conference on Object-Oriented Programming (ECOOP 2018)},
  pages =	{6:1--6:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-079-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{109},
  editor =	{Millstein, Todd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2018.6},
  URN =		{urn:nbn:de:0030-drops-92115},
  doi =		{10.4230/LIPIcs.ECOOP.2018.6},
  annote =	{Keywords: Symbolic Execution, Code Transformation, Machine Learning}
}
Document
Transforming Programs between APIs with Many-to-Many Mappings

Authors: Chenglong Wang, Jiajun Jiang, Jun Li, Yingfei Xiong, Xiangyu Luo, Lu Zhang, and Zhenjiang Hu

Published in: LIPIcs, Volume 56, 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Transforming programs between two APIs or different versions of the same API is a common software engineering task. However, existing languages supporting such transformations cannot satisfactorily handle the cases when the relations between elements in the old API and the new API are many-to-many mappings: multiple invocations to the old API are supposed to be replaced by multiple invocations to the new API. Since the multiple invocations of the original APIs may not appear consecutively and the variables in these calls may have different names, writing a tool that correctly covers all such invocation cases is not an easy task. In this paper we propose a novel guided-normalization approach to address this problem. Our core insight is that programs in different forms can be normalized, preserving semantics, into a basic form guided by transformation goals, and developers only need to write rules for the basic form to address the transformation. Based on this approach, we design a declarative program transformation language, PATL, for adapting Java programs between different APIs. PATL has simple syntax and basic semantics to handle transformations only considering consecutive statements inside basic blocks, while with guided-normalization, it can be extended to handle complex forms of invocations. Furthermore, PATL ensures that user-written rules will not accidentally break def-use relations in the program. We formalize the semantics of PATL on Middleweight Java and prove the semantics-preserving property of guided-normalization. We also evaluated our language with three non-trivial case studies: updating the Google Calendar API, switching from JDom to Dom4j, and switching from Swing to SWT. The result is encouraging; it shows that our language allows successful transformations of real-world programs with a small number of rules and little manual resolution.

Cite as

Chenglong Wang, Jiajun Jiang, Jun Li, Yingfei Xiong, Xiangyu Luo, Lu Zhang, and Zhenjiang Hu. Transforming Programs between APIs with Many-to-Many Mappings. In 30th European Conference on Object-Oriented Programming (ECOOP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 56, pp. 25:1-25:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.ECOOP.2016.25,
  author =	{Wang, Chenglong and Jiang, Jiajun and Li, Jun and Xiong, Yingfei and Luo, Xiangyu and Zhang, Lu and Hu, Zhenjiang},
  title =	{{Transforming Programs between APIs with Many-to-Many Mappings}},
  booktitle =	{30th European Conference on Object-Oriented Programming (ECOOP 2016)},
  pages =	{25:1--25:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-014-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{56},
  editor =	{Krishnamurthi, Shriram and Lerner, Benjamin S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2016.25},
  URN =		{urn:nbn:de:0030-drops-61195},
  doi =		{10.4230/LIPIcs.ECOOP.2016.25},
  annote =	{Keywords: Program transformation, API migration}
}

Zhang, Jialin

Document
Simple Deterministic Approximation for Submodular Multiple Knapsack Problem

Authors: Xiaoming Sun, Jialin Zhang, and Zhijie Zhang

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Submodular maximization has been a central topic in theoretical computer science and combinatorial optimization over the last decades. Many well-performing approximation algorithms have been designed for the problem under a variety of constraints. In this paper, we consider the submodular multiple knapsack problem (SMKP). In SMKP, the profit of each subset of elements is specified by a monotone submodular function. The goal is to find a feasible packing of elements over multiple bins (knapsacks) to maximize the profit. Recently, Fairstein et al. [ESA20] proposed a nearly optimal (1-e^{-1}-ε)-approximation algorithm for SMKP. Their algorithm is obtained by combining a configuration LP, a grouping technique for bin packing, and the continuous greedy algorithm for submodular maximization. As a result, the algorithm is somewhat sophisticated and inherently randomized. In this paper, we present an arguably simple deterministic combinatorial algorithm for SMKP, which achieves a (1-e^{-1}-ε)-approximation ratio. Our algorithm is based on very different ideas compared with Fairstein et al. [ESA20].
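
For contrast with the algorithm above, a minimal Python sketch of a naive greedy baseline for SMKP (not the paper's algorithm), using a toy coverage function as the monotone submodular profit:

def coverage(packed, cover):
    # toy monotone submodular profit: number of distinct items covered
    return len(set().union(*[cover[e] for e in packed]))

def greedy_smkp(elements, size, capacity, cover):
    bins = dict(capacity)                     # remaining bin capacities
    packed, remaining = [], list(elements)
    while True:
        best, base = None, coverage(packed, cover)
        for e in remaining:
            gain = coverage(packed + [e], cover) - base
            fits = [b for b, cap in bins.items() if size[e] <= cap]
            if fits and (best is None or gain > best[0]):
                best = (gain, e, fits[0])     # best marginal gain, first fit
        if best is None or best[0] <= 0:
            return packed
        gain, e, b = best
        packed.append(e); remaining.remove(e); bins[b] -= size[e]

cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
print(greedy_smkp(["a", "b", "c"], {"a": 2, "b": 2, "c": 1},
                  {"bin1": 3, "bin2": 1}, cover))   # ['a', 'c']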

Cite as

Xiaoming Sun, Jialin Zhang, and Zhijie Zhang. Simple Deterministic Approximation for Submodular Multiple Knapsack Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 98:1-98:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ESA.2023.98,
  author =	{Sun, Xiaoming and Zhang, Jialin and Zhang, Zhijie},
  title =	{{Simple Deterministic Approximation for Submodular Multiple Knapsack Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{98:1--98:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.98},
  URN =		{urn:nbn:de:0030-drops-187517},
  doi =		{10.4230/LIPIcs.ESA.2023.98},
  annote =	{Keywords: Submodular maximization, knapsack problem, deterministic algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Querying a Matrix Through Matrix-Vector Products

Authors: Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We consider algorithms with access to an unknown matrix M in F^{n x d} via matrix-vector products, namely, the algorithm chooses vectors v^1, ..., v^q, and observes Mv^1, ..., Mv^q. Here the v^i can be randomized as well as chosen adaptively as a function of Mv^1, ..., Mv^{i-1}. Motivated by applications of sketching in distributed computation, linear algebra, and streaming models, as well as connections to areas such as communication complexity and property testing, we initiate the study of the number q of queries needed to solve various fundamental problems. We study problems in three broad categories: linear algebra, statistics, and graph problems. For example, we consider the number of queries required to approximate the rank, trace, maximum eigenvalue, and norms of a matrix M; to compute the AND/OR/Parity of each column or row of M; to decide whether there are identical columns or rows in M or whether M is symmetric, diagonal, or unitary; or to compute whether a graph defined by M is connected or triangle-free. We also show separations for algorithms that are allowed to obtain matrix-vector products only by querying vectors on the right, versus algorithms that can query vectors on both the left and the right. We also show separations depending on the underlying field the matrix-vector product occurs in. For graph problems, we show separations depending on the form of the matrix (bipartite adjacency versus signed edge-vertex incidence matrix) used to represent the graph. Surprisingly, this fundamental model does not appear to have been studied on its own, and we believe a thorough investigation of problems in this model would be beneficial to a number of different application areas.
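
The access model is easy to demonstrate. A minimal Python sketch of one classical example in this model, Hutchinson's randomized trace estimator, which sees the hidden matrix only through matrix-vector products (this illustrates the query model, not the paper's query-optimal algorithms):

import numpy as np

def estimate_trace(matvec, n, q, rng=np.random.default_rng(0)):
    est = 0.0
    for _ in range(q):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher query vector
        est += v @ matvec(v)                  # observe Mv, form v^T M v
    return est / q

M = np.diag(np.arange(1.0, 101.0))            # hidden matrix, trace = 5050
print(estimate_trace(lambda v: M @ v, 100, 200))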

Cite as

Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang. Querying a Matrix Through Matrix-Vector Products. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 94:1-94:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ICALP.2019.94,
  author =	{Sun, Xiaoming and Woodruff, David P. and Yang, Guang and Zhang, Jialin},
  title =	{{Querying a Matrix Through Matrix-Vector Products}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{94:1--94:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.94},
  URN =		{urn:nbn:de:0030-drops-106709},
  doi =		{10.4230/LIPIcs.ICALP.2019.94},
  annote =	{Keywords: Communication complexity, linear algebra, sketching}
}
Document
On the Optimality of Tape Merge of Two Lists with Similar Size

Authors: Qian Li, Xiaoming Sun, and Jialin Zhang

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
The problem of merging sorted lists in the least number of pairwise comparisons has been solved completely only for a few special cases. Graham and Karp [TAOCP, 1999] independently discovered that the tape merge algorithm is optimal in the worst case when the two lists have the same size. Stockmeyer and Yao [SICOMP, 1980], Murphy and Paull [Inform. Control, 1979], and Christen [1978] independently showed that when the lists to be merged are of sizes m and n satisfying m ≤ n ≤ ⌊3m/2⌋ + 1, the tape merge algorithm is optimal in the worst case. This paper extends this result by showing that the tape merge algorithm is optimal in the worst case whenever the size of one list is no larger than 1.52 times the size of the other. The main tool we use to prove lower bounds is Knuth's adversary method [TAOCP, 1999]. In addition, we show that the lower bound cannot be improved to 1.8 via Knuth's adversary methods. We also develop a new inequality about Knuth's adversary methods, which might be interesting in its own right. Moreover, we design a simple procedure to achieve constant improvement of the upper bounds for 2m − 2 ≤ n ≤ 3m.
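
A minimal Python implementation of the tape merge algorithm itself, with an explicit comparison counter; on lists of sizes m and n it performs at most m + n - 1 comparisons, the quantity whose worst-case optimality is analyzed above:

def tape_merge(a, b):
    out, i, j, comparisons = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])      # one list is exhausted
    return out, comparisons

print(tape_merge([1, 3, 5], [2, 4, 6]))       # ([1, 2, 3, 4, 5, 6], 5)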

Cite as

Qian Li, Xiaoming Sun, and Jialin Zhang. On the Optimality of Tape Merge of Two Lists with Similar Size. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ISAAC.2016.51,
  author =	{Li, Qian and Sun, Xiaoming and Zhang, Jialin},
  title =	{{On the Optimality of Tape Merge of Two Lists with Similar Size}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.51},
  URN =		{urn:nbn:de:0030-drops-68219},
  doi =		{10.4230/LIPIcs.ISAAC.2016.51},
  annote =	{Keywords: comparison-based sorting, tape merge, optimal sort, adversary method}
}

Zhang, Jie

Document
Hardness Results for Consensus-Halving

Authors: Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen, Paul W. Goldberg, and Jie Zhang

Published in: LIPIcs, Volume 117, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)


Abstract
The Consensus-halving problem is the problem of dividing an object into two portions, such that each of n agents has equal valuation for the two portions. We study the epsilon-approximate version, which allows each agent to have an epsilon discrepancy on the values of the portions. It was recently proven in [Filos-Ratsikas and Goldberg, 2018] that the problem of computing an epsilon-approximate Consensus-halving solution (for n agents and n cuts) is PPA-complete when epsilon is inverse-exponential. In this paper, we prove that when epsilon is constant, the problem is PPAD-hard and the problem remains PPAD-hard when we allow a constant number of additional cuts. Additionally, we prove that deciding whether a solution with n-1 cuts exists for the problem is NP-hard.
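
Verifying a proposed solution is much easier than finding one. A minimal Python checker for a discretized variant (labeled pieces with additive agent valuations, an assumption made here for illustration; the paper works with a continuous object):

def is_eps_consensus_halving(labels, valuations, eps):
    # labels[i] in {'+', '-'} assigns piece i to one of the two portions;
    # valuations[a][i] >= 0 and each agent's values sum to 1
    for vals in valuations:
        plus = sum(v for v, l in zip(vals, labels) if l == '+')
        if abs(plus - (1.0 - plus)) > eps:    # agent's discrepancy too large
            return False
    return True

print(is_eps_consensus_halving(['+', '-', '+', '-'],
                               [[0.25, 0.25, 0.25, 0.25],
                                [0.4, 0.4, 0.1, 0.1]], eps=0.01))   # True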

Cite as

Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen, Paul W. Goldberg, and Jie Zhang. Hardness Results for Consensus-Halving. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 117, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{filosratsikas_et_al:LIPIcs.MFCS.2018.24,
  author =	{Filos-Ratsikas, Aris and Frederiksen, S{\o}ren Kristoffer Stiil and Goldberg, Paul W. and Zhang, Jie},
  title =	{{Hardness Results for Consensus-Halving}},
  booktitle =	{43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-086-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{117},
  editor =	{Potapov, Igor and Spirakis, Paul and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2018.24},
  URN =		{urn:nbn:de:0030-drops-96069},
  doi =		{10.4230/LIPIcs.MFCS.2018.24},
  annote =	{Keywords: PPAD, PPA, consensus halving, generalized-circuit, reduction}
}
Document
Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis

Authors: Xiaotie Deng, Yansong Gao, and Jie Zhang

Published in: LIPIcs, Volume 83, 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)


Abstract
The approximation ratio has become one of the dominant measures in mechanism design problems. In light of analysis of algorithms, we define the smoothed approximation ratio to compare the performance of the optimal mechanism and a truthful mechanism when the inputs are subject to random perturbations of the worst-case inputs, and define the average-case approximation ratio to compare the performance of these two mechanisms when the inputs follow a distribution. For the one-sided matching problem, Filos-Ratsikas et al. [2014] show that, amongst all truthful mechanisms, random priority achieves the tight approximation ratio bound of Theta(sqrt{n}). We prove that, despite this worst-case bound, random priority has a constant smoothed approximation ratio. To the best of our knowledge, this is the first work that asymptotically differentiates the smoothed approximation ratio from the worst-case approximation ratio for mechanism design problems. For the average case, we show that our approximation ratio can be improved to 1+e. These results partially explain why random priority has been successfully used in practice, although in the worst case the optimal social welfare is Theta(sqrt{n}) times what random priority achieves. These results also pave the way for further studies of smoothed and average-case analysis for approximate mechanism design problems, beyond the worst-case analysis.
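
A minimal Python sketch of the random priority (random serial dictatorship) mechanism discussed above, on a hypothetical preference profile:

import random

def random_priority(preferences, rng=random.Random(0)):
    # preferences[a] lists agent a's items, most preferred first
    agents = list(preferences)
    rng.shuffle(agents)                       # uniformly random priority order
    taken, matching = set(), {}
    for a in agents:
        for item in preferences[a]:
            if item not in taken:             # take the favorite remaining item
                matching[a] = item; taken.add(item)
                break
    return matching

print(random_priority({0: [0, 1], 1: [0, 1]}))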

Cite as

Xiaotie Deng, Yansong Gao, and Jie Zhang. Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{deng_et_al:LIPIcs.MFCS.2017.16,
  author =	{Deng, Xiaotie and Gao, Yansong and Zhang, Jie},
  title =	{{Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{16:1--16:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.16},
  URN =		{urn:nbn:de:0030-drops-80936},
  doi =		{10.4230/LIPIcs.MFCS.2017.16},
  annote =	{Keywords: mechanism design, approximation ratio, smoothed analysis, average-case analysis}
}

Zhang, Hongyang

Document
Recovery from Non-Decomposable Distance Oracles

Authors: Zhuangfei Hu, Xinda Li, David P. Woodruff, Hongyang Zhang, and Shufan Zhang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A line of work has looked at the problem of recovering an input from distance queries. In this setting, there is an unknown sequence s ∈ {0,1}^{≤ n}, and one chooses a set of queries y ∈ {0,1}^𝒪(n) and receives d(s,y) for a distance function d. The goal is to make as few queries as possible to recover s. Although this problem is well-studied for decomposable distances, i.e., distances of the form d(s,y) = ∑_{i=1}^n f(s_i, y_i) for some function f, which includes the important cases of Hamming distance, 𝓁_p-norms, and M-estimators, to the best of our knowledge this problem has not been studied for non-decomposable distances, for which there are important special cases such as edit distance, dynamic time warping (DTW), Fréchet distance, earth mover’s distance, and so on. We initiate the study and develop a general framework for such distances. Interestingly, for some distances such as DTW or Fréchet, exact recovery of the sequence s is provably impossible, and so we show by allowing the characters in y to be drawn from a slightly larger alphabet this then becomes possible. In a number of cases we obtain optimal or near-optimal query complexity. We also study the role of adaptivity for a number of different distance functions. One motivation for understanding non-adaptivity is that the query sequence can be fixed and the distances of the input to the queries provide a non-linear embedding of the input, which can be used in downstream applications involving, e.g., neural networks for natural language processing.
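
For contrast with the non-decomposable case, recovery from a decomposable oracle is simple. A minimal Python sketch for the Hamming distance: n + 1 non-adaptive queries suffice, since flipping bit i decreases the distance to the all-zeros query exactly when s_i = 1:

def recover_from_hamming_oracle(oracle, n):
    zero = [0] * n
    base = oracle(zero)                       # equals the weight of s
    s = []
    for i in range(n):
        y = zero[:]; y[i] = 1                 # flip a single coordinate
        s.append(1 if oracle(y) == base - 1 else 0)
    return s

secret = [1, 0, 1, 1, 0]
hamming = lambda y: sum(a != b for a, b in zip(secret, y))
print(recover_from_hamming_oracle(hamming, 5))   # [1, 0, 1, 1, 0]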

Cite as

Zhuangfei Hu, Xinda Li, David P. Woodruff, Hongyang Zhang, and Shufan Zhang. Recovery from Non-Decomposable Distance Oracles. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 73:1-73:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{hu_et_al:LIPIcs.ITCS.2023.73,
  author =	{Hu, Zhuangfei and Li, Xinda and Woodruff, David P. and Zhang, Hongyang and Zhang, Shufan},
  title =	{{Recovery from Non-Decomposable Distance Oracles}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{73:1--73:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.73},
  URN =		{urn:nbn:de:0030-drops-175767},
  doi =		{10.4230/LIPIcs.ITCS.2023.73},
  annote =	{Keywords: Sequence Recovery, Edit Distance, DTW Distance, Fr\'{e}chet Distance}
}
Document
Improved Algorithms for Adaptive Compressed Sensing

Authors: Vasileios Nakos, Xiaofei Shi, David P. Woodruff, and Hongyang Zhang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
In the problem of adaptive compressed sensing, one wants to estimate an approximately k-sparse vector x in R^n from m linear measurements A_1 x, A_2 x,..., A_m x, where A_i can be chosen based on the outcomes A_1 x,..., A_{i-1} x of previous measurements. The goal is to output a vector x^ for which |x-x^|_p <=C * min_{k-sparse x'} |x-x'|_q, with probability at least 2/3, where C > 0 is an approximation factor. Indyk, Price and Woodruff (FOCS'11) gave an algorithm for p=q=2 for C = 1+epsilon with O((k/epsilon) loglog (n/k)) measurements and O(log^*(k) loglog (n)) rounds of adaptivity. We first improve their bounds, obtaining a scheme with O(k * loglog (n/k) + (k/epsilon) * loglog(1/epsilon)) measurements and O(log^*(k) loglog (n)) rounds, as well as a scheme with O((k/epsilon) * loglog (n log (n/k))) measurements and an optimal O(loglog (n)) rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for (p,p) for every 0 < p < 2. We show that the improvement from O(k log(n/k)) measurements to O(k log log (n/k)) measurements in the adaptive setting can persist with a better epsilon-dependence for other values of p and q. For example, when (p,q) = (1,1), we obtain O(k/sqrt{epsilon} * log log n log^3 (1/epsilon)) measurements. We obtain nearly matching lower bounds, showing our algorithms are close to optimal. Along the way, we also obtain the first nearly-optimal bounds for (p,p) schemes for every 0 < p < 2 even in the non-adaptive setting.

Cite as

Vasileios Nakos, Xiaofei Shi, David P. Woodruff, and Hongyang Zhang. Improved Algorithms for Adaptive Compressed Sensing. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{nakos_et_al:LIPIcs.ICALP.2018.90,
  author =	{Nakos, Vasileios and Shi, Xiaofei and Woodruff, David P. and Zhang, Hongyang},
  title =	{{Improved Algorithms for Adaptive Compressed Sensing}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{90:1--90:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.90},
  URN =		{urn:nbn:de:0030-drops-90945},
  doi =		{10.4230/LIPIcs.ICALP.2018.90},
  annote =	{Keywords: Compressed Sensing, Adaptivity, High-Dimensional Vectors}
}
Document
Matrix Completion and Related Problems via Strong Duality

Authors: Maria-Florina Balcan, Yingyu Liang, David P. Woodruff, and Hongyang Zhang

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
This work studies the strong duality of non-convex matrix factorization problems: we show that under certain dual conditions, these problems and their duals have the same optimum. This has been well understood for convex optimization, but little was known for non-convex problems. We propose a novel analytical framework and show that under certain dual conditions, the optimal solution of the matrix factorization program is the same as that of its bi-dual, and thus the global optimality of the non-convex program can be achieved by solving its bi-dual, which is convex. These dual conditions are satisfied by a wide class of matrix factorization problems, although matrix factorization problems are hard to solve in full generality. This analytical framework may be of independent interest to non-convex optimization more broadly. We apply our framework to two prototypical matrix factorization problems: matrix completion and robust Principal Component Analysis (PCA). These are examples of efficiently recovering a hidden matrix given limited reliable observations of it. Our framework shows that exact recoverability and strong duality hold with nearly-optimal sample complexity guarantees for matrix completion and robust PCA.

Cite as

Maria-Florina Balcan, Yingyu Liang, David P. Woodruff, and Hongyang Zhang. Matrix Completion and Related Problems via Strong Duality. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{balcan_et_al:LIPIcs.ITCS.2018.5,
  author =	{Balcan, Maria-Florina and Liang, Yingyu and Woodruff, David P. and Zhang, Hongyang},
  title =	{{Matrix Completion and Related Problems via Strong Duality}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.5},
  URN =		{urn:nbn:de:0030-drops-83583},
  doi =		{10.4230/LIPIcs.ITCS.2018.5},
  annote =	{Keywords: Non-Convex Optimization, Strong Duality, Matrix Completion, Robust PCA, Sample Complexity}
}

Zhang, Qiuyi

Document
Convergence Results for Neural Networks via Electrodynamics

Authors: Rina Panigrahy, Ali Rahimi, Sushant Sachdeva, and Qiuyi Zhang

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We study whether a depth two neural network can learn another depth two network using gradient descent. Assuming a linear output node, we show that the question of whether gradient descent converges to the target function is equivalent to the following question in electrodynamics: Given k fixed protons in R^d, and k electrons, each moving due to the attractive force from the protons and repulsive force from the remaining electrons, whether at equilibrium all the electrons will be matched up with the protons, up to a permutation. Under the standard electrical force, this follows from the classic Earnshaw's theorem. In our setting, the force is determined by the activation function and the input distribution. Building on this equivalence, we prove the existence of an activation function such that gradient descent learns at least one of the hidden nodes in the target network. Iterating, we show that gradient descent can be used to learn the entire network one node at a time.
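
The described dynamics are easy to simulate directly. A minimal Python sketch with Coulomb-type forces in the plane (positions, step size, and iteration count are hypothetical choices made only for illustration):

import numpy as np

def simulate(protons, electrons, steps=5000, eta=1e-3):
    p, x = np.asarray(protons, float), np.asarray(electrons, float)
    for _ in range(steps):
        force = np.zeros_like(x)
        for i in range(len(x)):
            for q in p:                        # attraction toward each proton
                d = q - x[i]
                force[i] += d / (np.linalg.norm(d) ** 3 + 1e-9)
            for j in range(len(x)):            # repulsion between electrons
                if j != i:
                    d = x[i] - x[j]
                    force[i] += d / (np.linalg.norm(d) ** 3 + 1e-9)
        x += eta * force                        # gradient-descent-style step
    return x

print(simulate([[0, 0], [4, 0]], [[1.0, 1.0], [3.0, -1.0]]))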

Cite as

Rina Panigrahy, Ali Rahimi, Sushant Sachdeva, and Qiuyi Zhang. Convergence Results for Neural Networks via Electrodynamics. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{panigrahy_et_al:LIPIcs.ITCS.2018.22,
  author =	{Panigrahy, Rina and Rahimi, Ali and Sachdeva, Sushant and Zhang, Qiuyi},
  title =	{{Convergence Results for Neural Networks via Electrodynamics}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.22},
  URN =		{urn:nbn:de:0030-drops-83521},
  doi =		{10.4230/LIPIcs.ITCS.2018.22},
  annote =	{Keywords: Deep Learning, Learning Theory, Non-convex Optimization}
}

Zhang, Jingru

Document
An O(n log n)-Time Algorithm for the k-Center Problem in Trees

Authors: Haitao Wang and Jingru Zhang

Published in: LIPIcs, Volume 99, 34th International Symposium on Computational Geometry (SoCG 2018)


Abstract
We consider a classical k-center problem in trees. Let T be a tree of n vertices and every vertex has a nonnegative weight. The problem is to find k centers on the edges of T such that the maximum weighted distance from all vertices to their closest centers is minimized. Megiddo and Tamir (SIAM J. Comput., 1983) gave an algorithm that can solve the problem in O(n log^2 n) time by using Cole's parametric search. Since then it has been open for over three decades whether the problem can be solved in O(n log n) time. In this paper, we present an O(n log n) time algorithm for the problem and thus settle the open problem affirmatively.

Cite as

Haitao Wang and Jingru Zhang. An O(n log n)-Time Algorithm for the k-Center Problem in Trees. In 34th International Symposium on Computational Geometry (SoCG 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 99, pp. 72:1-72:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.SoCG.2018.72,
  author =	{Wang, Haitao and Zhang, Jingru},
  title =	{{An O(n log n)-Time Algorithm for the k-Center Problem in Trees}},
  booktitle =	{34th International Symposium on Computational Geometry (SoCG 2018)},
  pages =	{72:1--72:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-066-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{99},
  editor =	{Speckmann, Bettina and T\'{o}th, Csaba D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2018.72},
  URN =		{urn:nbn:de:0030-drops-87852},
  doi =		{10.4230/LIPIcs.SoCG.2018.72},
  annote =	{Keywords: k-center, trees, facility locations}
}

Zhang, Hanrui

Document
Improved Prophet Inequalities for Combinatorial Welfare Maximization with (Approximately) Subadditive Agents

Authors: Hanrui Zhang

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
We give a framework for designing prophet inequalities for combinatorial welfare maximization. Instantiated with different parameters, our framework implies (1) an O(log m / log log m)-competitive prophet inequality for subadditive agents, improving over the O(log m) upper bound via item pricing, (2) an O(D log m / log log m)-competitive prophet inequality for D-approximately subadditive agents, where D ∈ {1, … , m-1} measures the maximum number of items that complement each other, and (3) as a byproduct, an O(1)-competitive prophet inequality for submodular or fractionally subadditive (a.k.a. XOS) agents, matching the optimal ratio asymptotically. Our framework is computationally efficient given sample access to the prior and demand queries.
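
A minimal Python sketch of the classical single-item special case (a baseline, not the combinatorial framework above): accepting the first value above the threshold E[max]/2 guarantees half the prophet's value in expectation. The distributions here are hypothetical:

import random

rng = random.Random(0)
draws = lambda: [rng.uniform(0, i + 1) for i in range(5)]   # one value per round

# estimate E[max] by sampling the prior, then set the threshold to half of it
threshold = sum(max(draws()) for _ in range(2000)) / 2000 / 2

def threshold_rule(values, t):
    for v in values:
        if v >= t:
            return v            # stop at the first value above the threshold
    return values[-1]           # otherwise settle for the last value

print(threshold_rule(draws(), threshold))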

Cite as

Hanrui Zhang. Improved Prophet Inequalities for Combinatorial Welfare Maximization with (Approximately) Subadditive Agents. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 82:1-82:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhang:LIPIcs.ESA.2020.82,
  author =	{Zhang, Hanrui},
  title =	{{Improved Prophet Inequalities for Combinatorial Welfare Maximization with (Approximately) Subadditive Agents}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{82:1--82:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.82},
  URN =		{urn:nbn:de:0030-drops-129488},
  doi =		{10.4230/LIPIcs.ESA.2020.82},
  annote =	{Keywords: Prophet Inequalities, Combinatorial Welfare Maximization, (Approximate) Subadditivity}
}
Document
Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity

Authors: Wei Chen, Shang-Hua Teng, and Hanrui Zhang

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We introduce two new "degree of complementarity" measures: supermodular width and superadditive width. Both are formulated based on natural witnesses of complementarity. We show that both measures are robust by proving that they, respectively, characterize the gap of monotone set functions from being submodular and subadditive. Thus, they define two new hierarchies over monotone set functions, which we will refer to as the Supermodular Width (SMW) hierarchy and the Superadditive Width (SAW) hierarchy, with foundations - i.e. level 0 of the hierarchies - resting exactly on submodular and subadditive functions, respectively. We present a comprehensive comparative analysis of the SMW hierarchy and the Supermodular Degree (SD) hierarchy, defined by Feige and Izsak. We prove that the SMW hierarchy is strictly more expressive than the SD hierarchy: Every monotone set function of supermodular degree d has supermodular width at most d, and there exists a supermodular-width-1 function over a ground set of m elements whose supermodular degree is m-1. We show that previous results regarding approximation guarantees for welfare and constrained maximization as well as regarding the Price of Anarchy (PoA) of simple auctions can be extended without any loss from the supermodular degree to the supermodular width. We also establish almost matching information-theoretical lower bounds for these two well-studied fundamental maximization problems over set functions. The combination of these approximation and hardness results illustrates that the SMW hierarchy provides not only a natural notion of complementarity, but also an accurate characterization of "near submodularity" needed for maximization approximation. While the SD and SMW hierarchies support nontrivial bounds on the PoA of simple auctions, we show that our SAW hierarchy seems to capture more intrinsic properties needed to realize the efficiency of simple auctions. So far, the SAW hierarchy provides the best dependency for the PoA of Single-bid Auction, and is nearly as competitive as the Maximum over Positive Hypergraphs (MPH) hierarchy for Simultaneous Item First Price Auction (SIA). We also provide almost tight lower bounds for the PoA of both auctions with respect to the SAW hierarchy.
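
On small ground sets, the witness notion underlying these measures can be checked by brute force. A minimal Python sketch testing level 0 of the SMW hierarchy, i.e. whether a monotone set function is submodular (computing the width measures themselves is more involved):

from itertools import chain, combinations

def subsets(universe):
    return chain.from_iterable(combinations(universe, r)
                               for r in range(len(universe) + 1))

def is_submodular(f, universe):
    for S in map(set, subsets(universe)):
        for T in map(set, subsets(universe)):
            if S <= T:
                for x in universe - T:
                    # a larger marginal at T than at S witnesses complementarity
                    if f(T | {x}) - f(T) > f(S | {x}) - f(S):
                        return False
    return True

U = frozenset({1, 2, 3})
print(is_submodular(lambda A: min(len(A), 2), U))   # True: budget-additive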

Cite as

Wei Chen, Shang-Hua Teng, and Hanrui Zhang. Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ITCS.2019.24,
  author =	{Chen, Wei and Teng, Shang-Hua and Zhang, Hanrui},
  title =	{{Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.24},
  URN =		{urn:nbn:de:0030-drops-101174},
  doi =		{10.4230/LIPIcs.ITCS.2019.24},
  annote =	{Keywords: set functions, measure of complementarity, submodularity, subadditivity, cardinality constrained maximization, welfare maximization, simple auctions, price of anarchy}
}
Document
An Improved Algorithm for Incremental DFS Tree in Undirected Graphs

Authors: Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, and Tianyi Zhang

Published in: LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)


Abstract
The depth first search (DFS) tree is one of the most well-known data structures for designing efficient graph algorithms. Given an undirected graph G=(V,E) with n vertices and m edges, the textbook algorithm takes O(n+m) time to construct a DFS tree. In this paper, we study the problem of maintaining a DFS tree when the graph is undergoing incremental updates. Formally, we show: Given an arbitrary online sequence of edge or vertex insertions, there is an algorithm that reports a DFS tree in O(n) worst case time per operation, and requires O(min{m log n, n^2}) preprocessing time. Our result improves the previous O(n log^3 n) worst case update time algorithm by Baswana et al. [Baswana et al., 2016] and the O(n log n) time by Nakamura and Sadakane [Nakamura and Sadakane, 2017], and matches the trivial Omega(n) lower bound when it is required to explicitly output a DFS tree. Our result builds on the framework introduced in the breakthrough work by Baswana et al. [Baswana et al., 2016], together with a novel use of a tree-partition lemma by Duan and Zhang [Duan and Zhang, 2016], and the celebrated fractional cascading technique by Chazelle and Guibas [Chazelle and Guibas, 1986a; Chazelle and Guibas, 1986b].
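
For reference, a minimal Python sketch of the textbook O(n + m) DFS tree construction together with the naive incremental baseline, which simply rebuilds the tree after every insertion (the algorithm above replaces this rebuild with an O(n) worst-case update):

def dfs_tree(n, adj, root=0):
    parent = [None] * n
    stack, seen = [root], {root}
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v); parent[v] = u; stack.append(v)
    return parent

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: []}
adj[2].append(3); adj[3].append(2)     # naive baseline: insert edge, rebuild
print(dfs_tree(4, adj))                # [None, 0, 0, 2]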

Cite as

Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, and Tianyi Zhang. An Improved Algorithm for Incremental DFS Tree in Undirected Graphs. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 16:1-16:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.SWAT.2018.16,
  author =	{Chen, Lijie and Duan, Ran and Wang, Ruosong and Zhang, Hanrui and Zhang, Tianyi},
  title =	{{An Improved Algorithm for Incremental DFS Tree in Undirected Graphs}},
  booktitle =	{16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)},
  pages =	{16:1--16:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-068-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{101},
  editor =	{Eppstein, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.16},
  URN =		{urn:nbn:de:0030-drops-88427},
  doi =		{10.4230/LIPIcs.SWAT.2018.16},
  annote =	{Keywords: DFS tree, fractional cascading, fully dynamic algorithm}
}

Zhang, Tianyi

Document
Track A: Algorithms, Complexity and Games
Streaming Edge Coloring with Subquadratic Palette Size

Authors: Shiri Chechik, Doron Mukhtar, and Tianyi Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper, we study the problem of computing an edge-coloring in the (one-pass) W-streaming model. In this setting, the edges of an n-node graph arrive in an arbitrary order to a machine with a relatively small space, and the goal is to design an algorithm that outputs, as a stream, a proper coloring of the edges using the fewest possible colors. Behnezhad et al. [Behnezhad et al., 2019] devised the first non-trivial algorithm for this problem, which computes in Õ(n) space a proper O(Δ²)-coloring w.h.p. (here Δ is the maximum degree of the graph). Subsequent papers improved upon this result, and the latest of them [Ansari et al., 2022] showed that it is possible to deterministically compute an O(Δ²/s)-coloring in O(ns) space. However, none of the improvements succeeded in reducing the number of colors to O(Δ^{2-ε}) while keeping the same space bound of Õ(n). In particular, no progress was made on the question of whether computing an O(Δ)-coloring is possible with roughly O(n) space, which was stated in [Behnezhad et al., 2019] to be an interesting open problem. In this paper we bypass the quadratic bound by presenting a new randomized Õ(n)-space algorithm that uses Õ(Δ^{1.5}) colors.
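
For intuition, a minimal Python sketch of the offline greedy baseline: scanning edges in arrival order and giving each the smallest color unused at both endpoints needs at most 2Δ - 1 colors, but it keeps per-vertex color sets, state the W-streaming algorithms above cannot afford beyond roughly O(n) space:

from collections import defaultdict

def greedy_edge_coloring(edges):
    used = defaultdict(set)            # colors already present at each vertex
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1                     # smallest color free at both endpoints
        coloring[(u, v)] = c
        used[u].add(c); used[v].add(c)
    return coloring

print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (0, 3)]))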

Cite as

Shiri Chechik, Doron Mukhtar, and Tianyi Zhang. Streaming Edge Coloring with Subquadratic Palette Size. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 40:1-40:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.40,
  author =	{Chechik, Shiri and Mukhtar, Doron and Zhang, Tianyi},
  title =	{{Streaming Edge Coloring with Subquadratic Palette Size}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{40:1--40:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.40},
  URN =		{urn:nbn:de:0030-drops-201831},
  doi =		{10.4230/LIPIcs.ICALP.2024.40},
  annote =	{Keywords: graph algorithms, streaming algorithms, edge coloring}
}
Document
Track A: Algorithms, Complexity and Games
Faster Algorithms for Dual-Failure Replacement Paths

Authors: Shiri Chechik and Tianyi Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Given a simple weighted directed graph G = (V, E, ω) on n vertices as well as two designated terminals s, t ∈ V, our goal is to compute the shortest path from s to t avoiding any pair of presumably failed edges f₁, f₂ ∈ E, which is a natural generalization of the classical replacement path problem which considers single edge failures only. This dual failure replacement paths problem was recently studied by Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022], who designed a cubic time algorithm for general weighted digraphs which is conditionally optimal; in the same paper, for unweighted graphs where ω ≡ 1, the authors presented an algebraic algorithm with runtime Õ(n^{2.9146}), as well as a conditional lower bound of n^{8/3-o(1)} against combinatorial algorithms. However, it was unknown in their work whether fast matrix multiplication is necessary for a subcubic runtime in unweighted digraphs. As our primary result, we present the first truly subcubic combinatorial algorithm for dual failure replacement paths in unweighted digraphs, with runtime Õ(n^{3-1/18}). In addition, we study algebraic algorithms for digraphs with small integer edge weights from {-M, -M+1, ⋯, M-1, M}. As our secondary result, we obtain a runtime of Õ(Mn^{2.8716}), which is faster than the previous bound of Õ(M^{2/3}n^{2.9144} + Mn^{2.8716}) from [Vassilevska Williams, Woldeghebriel and Xu, 2022].
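
A minimal Python sketch of the naive per-query baseline: answer a single failure pair by running Dijkstra on the graph with both edges removed. The algorithms above avoid paying a fresh shortest-path computation for each of the Θ(m²) failure pairs:

import heapq

def shortest_path_avoiding(adj, s, t, f1, f2):
    # adj[u] = [(v, weight), ...]; f1, f2 are directed edges (u, v) to avoid
    banned, dist, pq = {f1, f2}, {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if (u, v) in banned:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

adj = {"s": [("a", 1), ("t", 5)], "a": [("t", 1)]}
print(shortest_path_avoiding(adj, "s", "t", ("s", "a"), ("a", "b")))   # 5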

Cite as

Shiri Chechik and Tianyi Zhang. Faster Algorithms for Dual-Failure Replacement Paths. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 41:1-41:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.41,
  author =	{Chechik, Shiri and Zhang, Tianyi},
  title =	{{Faster Algorithms for Dual-Failure Replacement Paths}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{41:1--41:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.41},
  URN =		{urn:nbn:de:0030-drops-201849},
  doi =		{10.4230/LIPIcs.ICALP.2024.41},
  annote =	{Keywords: graph algorithms, shortest paths, replacement paths}
}
Document
Track A: Algorithms, Complexity and Games
Path-Reporting Distance Oracles with Logarithmic Stretch and Linear Size

Authors: Shiri Chechik and Tianyi Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Given an undirected graph G = (V, E, 𝐰) on n vertices with positive edge weights, a distance oracle is a space-efficient data structure that answers pairwise distance queries in fast runtime. The quality of a distance oracle is measured by three parameters: space, query time, and stretch. In a landmark paper, Thorup and Zwick [2001] showed that for any integer parameter k ≥ 1, there exists a distance oracle with size O(kn^{1+1/k}), O(k) query time, and (2k-1)-stretch error on the approximate distances. After that, there has been a line of subsequent improvements which culminated in the optimal trade-off of O(n^{1+1/k}) space, O(1) query time, and (2k-1)-stretch [Chechik, 2015]. However, this line of constructions did not require that the distance oracle be capable of printing an actual path besides an approximate distance estimate, and there has been a performance gap between path-reporting distance oracles and ones that are not path-reporting. It is known that the earliest construction by [Thorup and Zwick, 2001] is path-reporting, but the parameters are worse by a factor of k. In a later construction by [Wulff-Nilsen, 2013], the query time was improved from O(k) to O(log k). Better trade-offs were discovered in [Elkin and Pettie, 2015], where the authors broke the O(kn^{1+1/k}) space barrier and achieved O(n^{1+1/k}log k) space with O(log k) query time, but their stretch was blown up to a polynomial O(k^{log_{4/3}7}); they also gave an alternative choice of O(n^{1+1/k}) space, which is optimal, and O(k)-stretch, which is also optimal up to a constant factor, but their query time rose to O(n^ε). In a recent work [Elkin and Shabat, 2023], the authors obtained significant improvements of O(n^{1+1/k}log k) space, O(k)-stretch, and O(log log k) query time, or O(n^{1+1/k}) space, O(klog k)-stretch, and O(log log k) query time. All the above constructions of path-reporting distance oracles share a common barrier: they could not achieve optimal space O(n^{1+1/k}) and stretch O(k) simultaneously within logarithmic query time; for example, in the natural regime where k = ⌈log n⌉, previous distance oracles had to pay an extra factor of log log n either in the space or in the stretch. As our result, we bypass this barrier by a new construction of path-reporting distance oracles with O(n^{1+1/k}) space, O(k) stretch, and O(log log k) query time.
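
To make the interface concrete, a minimal Python sketch of a landmark-based path-reporting oracle (a simple baseline on a connected unweighted graph, not the construction above): each query returns a distance estimate together with an actual path through a landmark:

from collections import deque

def bfs_tree(adj, src):
    parent, dist, q = {src: None}, {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1; parent[v] = u; q.append(v)
    return parent, dist

def query(trees, u, v):
    # pick the landmark l minimizing d(u, l) + d(v, l)
    l, (parent, dist) = min(trees.items(),
                            key=lambda t: t[1][1][u] + t[1][1][v])
    def walk(x):                       # path from x up to the landmark l
        path = []
        while x is not None:
            path.append(x); x = parent[x]
        return path
    return dist[u] + dist[v], walk(u) + walk(v)[::-1][1:]

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
trees = {1: bfs_tree(adj, 1)}
print(query(trees, 0, 3))              # (3, [0, 1, 2, 3])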

Cite as

Shiri Chechik and Tianyi Zhang. Path-Reporting Distance Oracles with Logarithmic Stretch and Linear Size. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.42,
  author =	{Chechik, Shiri and Zhang, Tianyi},
  title =	{{Path-Reporting Distance Oracles with Logarithmic Stretch and Linear Size}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.42},
  URN =		{urn:nbn:de:0030-drops-201859},
  doi =		{10.4230/LIPIcs.ICALP.2024.42},
  annote =	{Keywords: graph algorithms, shortest paths, distance oracles}
}
Document
Track A: Algorithms, Complexity and Games
Faster Cut-Equivalent Trees in Simple Graphs

Authors: Tianyi Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
Let G = (V, E) be an undirected connected simple graph on n vertices. A cut-equivalent tree of G is an edge-weighted tree on the same vertex set V, such that for any pair of vertices s, t ∈ V, the minimum (s, t)-cut in the tree is also a minimum (s, t)-cut in G, and these two cuts have the same cut value. In a recent paper [Abboud, Krauthgamer and Trabelsi, STOC 2021], the authors propose the first subcubic time algorithm for constructing a cut-equivalent tree. More specifically, their algorithm has Õ(n^{2.5}) running time. Later on, this running time was significantly improved to n^{2+o(1)} by two independent works [Abboud, Krauthgamer and Trabelsi, FOCS 2021] and [Li, Panigrahi, Saranurak, FOCS 2021], and then to (m+n^{1.9})^{1+o(1)} by [Abboud, Krauthgamer and Trabelsi, SODA 2022]. In this paper, we improve the running time to Õ(n²) if near-linear time max-flow algorithms exist, or to Õ(n^{17/8}) using the currently fastest max-flow algorithm. Although our algorithm is slower than previous works in general, its runtime bound is better by a sub-polynomial factor on dense simple graphs when assuming near-linear time max-flow algorithms.
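A cut-equivalent tree is also known as a Gomory-Hu tree. As a minimal illustration of the defining property (not of this paper's algorithm), the following Python sketch builds one with networkx and checks that every pairwise minimum cut in the graph equals the lightest edge on the corresponding tree path:

    import networkx as nx

    # toy simple graph with unit capacities (the setting of the paper)
    G = nx.Graph()
    G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
    nx.set_edge_attributes(G, 1, "capacity")

    T = nx.gomory_hu_tree(G)  # cut-equivalent (Gomory-Hu) tree

    def tree_min_cut(T, s, t):
        # the minimum (s,t)-cut value is the lightest edge on the s-t tree path
        path = nx.shortest_path(T, s, t)
        return min(T[u][v]["weight"] for u, v in zip(path, path[1:]))

    for s in G:
        for t in G:
            if s < t:
                assert tree_min_cut(T, s, t) == nx.minimum_cut_value(G, s, t)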

Cite as

Tianyi Zhang. Faster Cut-Equivalent Trees in Simple Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 109:1-109:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{zhang:LIPIcs.ICALP.2022.109,
  author =	{Zhang, Tianyi},
  title =	{{Faster Cut-Equivalent Trees in Simple Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{109:1--109:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.109},
  URN =		{urn:nbn:de:0030-drops-164507},
  doi =		{10.4230/LIPIcs.ICALP.2022.109},
  annote =	{Keywords: graph algorithms, minimum cuts, max-flow}
}
Document
Track A: Algorithms, Complexity and Games
Deterministic Maximum Flows in Simple Graphs

Authors: Tianyi Zhang

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
In this paper we are interested in deterministically computing maximum flows in undirected simple graphs where edges have unit capacities. When the input graph has n vertices and m edges, and the maximum flow is known in advance to be upper bounded by τ, our algorithm has running time Õ(m + n^{5/3}τ^{1/2}); in the extreme case where τ = Θ(n), our algorithm has running time Õ(n^{2.17}). This always improves upon the previous best deterministic upper bound Õ(n^{9/4}τ^{1/8}) by [Duan, 2013]. Furthermore, when τ ≥ n^{0.67} our algorithm is faster than the classical upper bound of O(m + nτ^{3/2}) by [Karger and Levine, 1998].

Cite as

Tianyi Zhang. Deterministic Maximum Flows in Simple Graphs. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 114:1-114:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{zhang:LIPIcs.ICALP.2021.114,
  author =	{Zhang, Tianyi},
  title =	{{Deterministic Maximum Flows in Simple Graphs}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{114:1--114:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.114},
  URN =		{urn:nbn:de:0030-drops-141832},
  doi =		{10.4230/LIPIcs.ICALP.2021.114},
  annote =	{Keywords: graph algorithms, maximum flows, dynamic data structures}
}
Document
Track A: Algorithms, Complexity and Games
A Scaling Algorithm for Weighted f-Factors in General Graphs

Authors: Ran Duan, Haoqing He, and Tianyi Zhang

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
We study the maximum weight perfect f-factor problem on a general simple graph G = (V,E,ω) with positive integral edge weights ω, and n = |V|, m = |E|. Given a function f: V → ℕ_+ on the vertices, a perfect f-factor is a generalized matching in which every vertex u is matched by exactly f(u) different edges. The previous best results on this problem have running time O(m f(V)) [Gabow 2018] or Õ(W(f(V))^{2.373}) [Gabow and Sankowski 2013], where W is the maximum edge weight and f(V) = ∑_{u ∈ V}f(u). In this paper, we present a scaling algorithm for this problem with running time Õ(mn^{2/3} log W). Previously this bound was only known for bipartite graphs [Gabow and Tarjan 1989]. The advantage is that the running time is independent of f(V), and consequently it breaks the Ω(mn) barrier for large f(V), even for the unweighted f-factor problem in general graphs.
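To make the definition concrete, here is a tiny brute-force Python sketch (exponential in the number of edges, purely illustrative and unrelated to the scaling algorithm) that finds a maximum-weight perfect f-factor on a toy instance:

    from itertools import combinations

    def max_weight_perfect_f_factor(n, edges, f):
        # brute force over edge subsets; edges = [(u, v, weight)], f = degree demands
        best, best_w = None, None
        for r in range(len(edges) + 1):
            for subset in combinations(edges, r):
                deg = [0] * n
                for u, v, _ in subset:
                    deg[u] += 1
                    deg[v] += 1
                if all(deg[u] == f[u] for u in range(n)):
                    w = sum(wt for _, _, wt in subset)
                    if best_w is None or w > best_w:
                        best, best_w = subset, w
        return best, best_w

    # demands: vertex 2 needs degree 2, vertices 0 and 1 degree 1, vertex 3 degree 0
    edges = [(0, 1, 5), (1, 2, 4), (0, 2, 3), (2, 3, 7)]
    print(max_weight_perfect_f_factor(4, edges, f=[1, 1, 2, 0]))
    # -> (((1, 2, 4), (0, 2, 3)), 7)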

Cite as

Ran Duan, Haoqing He, and Tianyi Zhang. A Scaling Algorithm for Weighted f-Factors in General Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 41:1-41:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{duan_et_al:LIPIcs.ICALP.2020.41,
  author =	{Duan, Ran and He, Haoqing and Zhang, Tianyi},
  title =	{{A Scaling Algorithm for Weighted f-Factors in General Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{41:1--41:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.41},
  URN =		{urn:nbn:de:0030-drops-124487},
  doi =		{10.4230/LIPIcs.ICALP.2020.41},
  annote =	{Keywords: Scaling Algorithm, f-Factors, General Graphs}
}
Document
An Improved Algorithm for Incremental DFS Tree in Undirected Graphs

Authors: Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, and Tianyi Zhang

Published in: LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)


Abstract
The depth-first search (DFS) tree is one of the most well-known data structures for designing efficient graph algorithms. Given an undirected graph G=(V,E) with n vertices and m edges, the textbook algorithm takes O(n+m) time to construct a DFS tree. In this paper, we study the problem of maintaining a DFS tree when the graph undergoes incremental updates. Formally, we show: given an arbitrary online sequence of edge or vertex insertions, there is an algorithm that reports a DFS tree in O(n) worst case time per operation, and requires O(min{m log n, n²}) preprocessing time. Our result improves the previous O(n log³ n) worst case update time algorithm by Baswana et al. [Baswana et al., 2016] and the O(n log n) time by Nakamura and Sadakane [Nakamura and Sadakane, 2017], and matches the trivial Ω(n) lower bound when it is required to explicitly output a DFS tree. Our result builds on the framework introduced in the breakthrough work by Baswana et al. [Baswana et al., 2016], together with a novel use of a tree-partition lemma by Duan and Zhang [Duan and Zhang, 2016], and the celebrated fractional cascading technique by Chazelle and Guibas [Chazelle and Guibas, 1986a; Chazelle and Guibas, 1986b].
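For reference, the textbook O(n+m) construction that the abstract takes as its starting point is a plain DFS; a minimal iterative Python sketch (names ours):

    def dfs_tree(adj, root):
        # textbook O(n + m) DFS tree: returns parent pointers of the tree
        # rooted at `root`; adj = {vertex: list of neighbours}
        parent = {root: None}
        stack = [(root, iter(adj[root]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if v not in parent:
                    parent[v] = u
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
        return parent

    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(dfs_tree(adj, 0))  # {0: None, 1: 0, 2: 1, 3: 2}

The incremental algorithms in the paper maintain such a tree under insertions without recomputing it from scratch.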

Cite as

Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, and Tianyi Zhang. An Improved Algorithm for Incremental DFS Tree in Undirected Graphs. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 16:1-16:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.SWAT.2018.16,
  author =	{Chen, Lijie and Duan, Ran and Wang, Ruosong and Zhang, Hanrui and Zhang, Tianyi},
  title =	{{An Improved Algorithm for Incremental DFS Tree in Undirected Graphs}},
  booktitle =	{16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)},
  pages =	{16:1--16:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-068-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{101},
  editor =	{Eppstein, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.16},
  URN =		{urn:nbn:de:0030-drops-88427},
  doi =		{10.4230/LIPIcs.SWAT.2018.16},
  annote =	{Keywords: DFS tree, fractional cascading, fully dynamic algorithm}
}

Zhang, Jiapeng

Document
Fractional Certificates for Bounded Functions

Authors: Shachar Lovett and Jiapeng Zhang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A folklore conjecture in quantum computing is that the acceptance probability of a quantum query algorithm can be approximated by a classical decision tree, with only a polynomial increase in the number of queries. Motivated by this conjecture, Aaronson and Ambainis (Theory of Computing, 2014) conjectured that this should hold more generally for any bounded function computed by a low degree polynomial. In this work we prove two new results towards establishing this conjecture: first, that any such polynomial has a small fractional certificate complexity; and second, that many inputs have a small sensitive block. We show that these would imply the Aaronson and Ambainis conjecture, assuming a conjectured extension of Talagrand’s concentration inequality. On the technical side, many classical techniques used in the analysis of Boolean functions seem to fail when applied to bounded functions. Here, we develop a new technique, based on a mix of combinatorics, analysis, and geometry, which in part extends a recent technique of Knop et al. (STOC 2021) to bounded functions.

Cite as

Shachar Lovett and Jiapeng Zhang. Fractional Certificates for Bounded Functions. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 84:1-84:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{lovett_et_al:LIPIcs.ITCS.2023.84,
  author =	{Lovett, Shachar and Zhang, Jiapeng},
  title =	{{Fractional Certificates for Bounded Functions}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{84:1--84:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.84},
  URN =		{urn:nbn:de:0030-drops-175871},
  doi =		{10.4230/LIPIcs.ITCS.2023.84},
  annote =	{Keywords: Aaronson-Ambainis conjecture, fractional block sensitivity, Talagrand inequality}
}
Document
Lifting with Sunflowers

Authors: Shachar Lovett, Raghu Meka, Ian Mertz, Toniann Pitassi, and Jiapeng Zhang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Query-to-communication lifting theorems translate lower bounds on query complexity to lower bounds for the corresponding communication model. In this paper, we give a simplified proof of deterministic lifting (in both the tree-like and dag-like settings). Our proof uses elementary counting together with a novel connection to the sunflower lemma. In addition to a simplified proof, our approach opens up a new avenue of attack towards proving lifting theorems with improved gadget size - one of the main challenges in the area. Focusing on one of the most widely used gadgets - the index gadget - existing lifting techniques are known to require at least a quadratic gadget size. Our new approach combined with robust sunflower lemmas allows us to reduce the gadget size to near linear. We conjecture that it can be further improved to polylogarithmic, similar to the known bounds for the corresponding robust sunflower lemmas.
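To make the index gadget concrete, here is a minimal Python sketch of the composed (lifted) function f ∘ IND^n that lifting theorems operate on; the gadget size m = 4 and the outer XOR function are arbitrary choices of ours:

    def index_gadget(x, y):
        # IND_m: Alice holds a pointer x in [m], Bob holds a string y in {0,1}^m;
        # the gadget's value is the pointed-to bit y[x]
        return y[x]

    def lifted(f, xs, ys):
        # f ∘ IND^n: apply the gadget coordinatewise, then evaluate f on the bits
        return f(tuple(index_gadget(x, y) for x, y in zip(xs, ys)))

    f = lambda z: z[0] ^ z[1]  # outer function: 2-bit XOR
    print(lifted(f, xs=(2, 0), ys=((0, 1, 1, 0), (1, 0, 0, 1))))  # y1[2] ^ y2[0] = 0

A lifting theorem states that the communication complexity of such a composed function is governed by the query complexity of f; the gadget size m needed for this to hold is the parameter the abstract discusses.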

Cite as

Shachar Lovett, Raghu Meka, Ian Mertz, Toniann Pitassi, and Jiapeng Zhang. Lifting with Sunflowers. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 104:1-104:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{lovett_et_al:LIPIcs.ITCS.2022.104,
  author =	{Lovett, Shachar and Meka, Raghu and Mertz, Ian and Pitassi, Toniann and Zhang, Jiapeng},
  title =	{{Lifting with Sunflowers}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{104:1--104:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.104},
  URN =		{urn:nbn:de:0030-drops-157004},
  doi =		{10.4230/LIPIcs.ITCS.2022.104},
  annote =	{Keywords: Lifting theorems, communication complexity, combinatorics, sunflowers}
}
Document
From DNF Compression to Sunflower Theorems via Regularity

Authors: Shachar Lovett, Noam Solomon, and Jiapeng Zhang

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The sunflower conjecture is one of the most well-known open problems in combinatorics. It has several applications in theoretical computer science, one of which is DNF compression, due to Gopalan, Meka and Reingold (Computational Complexity, 2013). In this paper, we show that improved bounds for DNF compression imply improved bounds for the sunflower conjecture, which is the reverse direction of the DNF compression result. The main approach is based on regularity of set systems and a structure-vs-pseudorandomness approach to the sunflower conjecture.

Cite as

Shachar Lovett, Noam Solomon, and Jiapeng Zhang. From DNF Compression to Sunflower Theorems via Regularity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{lovett_et_al:LIPIcs.CCC.2019.5,
  author =	{Lovett, Shachar and Solomon, Noam and Zhang, Jiapeng},
  title =	{{From DNF Compression to Sunflower Theorems via Regularity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.5},
  URN =		{urn:nbn:de:0030-drops-108277},
  doi =		{10.4230/LIPIcs.CCC.2019.5},
  annote =	{Keywords: DNF sparsification, sunflower conjecture, regular set systems}
}
Document
Sunflowers and Quasi-Sunflowers from Randomness Extractors

Authors: Xin Li, Shachar Lovett, and Jiapeng Zhang

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
The Erdős-Rado sunflower theorem (Journal of the London Mathematical Society, 1960) is a fundamental result in combinatorics, and the corresponding sunflower conjecture is a central open problem. Motivated by applications in complexity theory, Rossman (FOCS 2010) extended the result to quasi-sunflowers, where similar conjectures emerge about the optimal parameters for which it holds. In this work, we exhibit a surprising connection between the existence of sunflowers and quasi-sunflowers in large enough set systems and the problem of constructing (or proving the existence of) certain randomness extractors. This allows us to re-derive the known results in a systematic manner, and to reduce the relevant conjectures to the problem of obtaining improved constructions of such randomness extractors.
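For concreteness, a sunflower with p petals is a family of p sets whose pairwise intersections all equal the common core; the following brute-force Python sketch (exponential, demo only) finds one in a small family:

    from itertools import combinations

    def is_sunflower(sets):
        # a sunflower: every pairwise intersection equals the common core
        core = set.intersection(*sets)
        return all(a & b == core for a, b in combinations(sets, 2))

    def find_sunflower(family, p):
        # brute-force search for a sunflower with p petals; the Erdos-Rado
        # theorem guarantees one in any family of more than s!(p-1)^s s-sets
        for cand in combinations(family, p):
            if is_sunflower([set(c) for c in cand]):
                return cand
        return None

    family = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
    print(find_sunflower(family, 3))  # three petals around the core {1, 2}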

Cite as

Xin Li, Shachar Lovett, and Jiapeng Zhang. Sunflowers and Quasi-Sunflowers from Randomness Extractors. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 51:1-51:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.APPROX-RANDOM.2018.51,
  author =	{Li, Xin and Lovett, Shachar and Zhang, Jiapeng},
  title =	{{Sunflowers and Quasi-Sunflowers from Randomness Extractors}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{51:1--51:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.51},
  URN =		{urn:nbn:de:0030-drops-94555},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.51},
  annote =	{Keywords: Sunflower conjecture, Quasi-sunflowers, Randomness Extractors}
}
Document
A Tight Lower Bound for Entropy Flattening

Authors: Yi-Hsiu Chen, Mika Göös, Salil P. Vadhan, and Jiapeng Zhang

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
We study entropy flattening: Given a circuit C_X implicitly describing an n-bit source X (namely, X is the output of C_X on a uniform random input), construct another circuit C_Y describing a source Y such that (1) source Y is nearly flat (uniform on its support), and (2) the Shannon entropy of Y is monotonically related to that of X. The standard solution is to have C_Y evaluate C_X altogether Θ(n²) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: any circuit C_Y for entropy flattening that repeatedly queries C_X as an oracle requires Ω(n²) queries. Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [Johan Håstad et al., 1999; John Rompel, 1990; Thomas Holenstein, 2006; Iftach Haitner et al., 2006; Iftach Haitner et al., 2009; Iftach Haitner et al., 2013; Iftach Haitner et al., 2010; Salil P. Vadhan and Colin Jia Zheng, 2012]. It is also used in reductions between problems complete for statistical zero-knowledge [Tatsuaki Okamoto, 2000; Amit Sahai and Salil P. Vadhan, 1997; Oded Goldreich et al., 1999; Vadhan, 1999]. The Θ(n²) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of pseudorandom generators from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
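The standard construction concatenates independent copies of X, and the asymptotic equipartition property is what makes the output nearly flat; a small numeric Python demo (NumPy assumed, distribution chosen arbitrarily by us):

    import numpy as np

    rng = np.random.default_rng(0)
    p = np.array([0.5, 0.25, 0.125, 0.125])  # output distribution of the source X
    H = -(p * np.log2(p)).sum()              # Shannon entropy H(X) = 1.75 bits

    # Y concatenates t independent copies of X; by the asymptotic equipartition
    # property, -log2 Pr[Y] / t concentrates around H(X), i.e. Y is nearly flat.
    for t in (10, 100, 10000):
        sample = rng.choice(len(p), size=t, p=p)
        surprisal = -np.log2(p[sample]).sum() / t
        print(t, round(surprisal, 3))  # tends to 1.75 as t grows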

Cite as

Yi-Hsiu Chen, Mika Göös, Salil P. Vadhan, and Jiapeng Zhang. A Tight Lower Bound for Entropy Flattening. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 23:1-23:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.CCC.2018.23,
  author =	{Chen, Yi-Hsiu and G\"{o}\"{o}s, Mika and Vadhan, Salil P. and Zhang, Jiapeng},
  title =	{{A Tight Lower Bound for Entropy Flattening}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{23:1--23:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.23},
  URN =		{urn:nbn:de:0030-drops-88669},
  doi =		{10.4230/LIPIcs.CCC.2018.23},
  annote =	{Keywords: Entropy, One-way function}
}

Zhang, Ping

Document
A Linear-Time Algorithm for Finding Induced Planar Subgraphs

Authors: Shixun Huang, Zhifeng Bao, J. Shane Culpepper, Ping Zhang, and Bang Zhang

Published in: LIPIcs, Volume 103, 17th International Symposium on Experimental Algorithms (SEA 2018)


Abstract
In this paper we study the problem of efficiently and effectively extracting induced planar subgraphs. Edwards and Farr proposed an algorithm with O(mn) time complexity to find an induced planar subgraph of at least 3n/(d+1) vertices in a graph of maximum degree d. They also proposed an alternative algorithm with O(mn) time complexity to find an induced planar subgraph of at least 3n/(d̄+1) vertices, where d̄ is the average degree of the graph. These two methods appear to be the best known when d and d̄ are small. Unfortunately, they sacrifice accuracy for lower time complexity by using indirect indicators of planarity. A limitation of those approaches is that the algorithms do not test for planarity as they proceed, and the additional cost of a separate planarity test can be significant in large graphs. In contrast, we propose a linear-time algorithm that finds an induced planar subgraph of n-ν vertices in a graph of n vertices, where ν denotes the total number of vertices shared by the detected Kuratowski subdivisions. An added benefit of our approach is that we are able to detect when a graph is planar, and terminate the reduction. The resulting planar subgraphs also do not have any rigid constraints on the maximum degree of the induced subgraph. The experimental results show that our method achieves better performance than current methods on graphs with small skewness.
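As a naive restatement of the n-ν idea (not the paper's linear-time algorithm), one can repeatedly ask networkx for a Kuratowski subdivision and delete its vertices until the remainder is planar:

    import networkx as nx

    def induced_planar_subgraph(G):
        # repeatedly locate a Kuratowski subdivision and delete its vertices
        # until the remainder is planar; a quadratic-ish sketch of the idea,
        # whereas the paper achieves it in linear time
        H = G.copy()
        while True:
            is_planar, certificate = nx.check_planarity(H, counterexample=True)
            if is_planar:
                return H
            H.remove_nodes_from(certificate.nodes)  # a K5 or K3,3 subdivision

    G = nx.complete_graph(6)  # K6 is non-planar
    H = induced_planar_subgraph(G)
    print(nx.check_planarity(H)[0], H.number_of_nodes())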

Cite as

Shixun Huang, Zhifeng Bao, J. Shane Culpepper, Ping Zhang, and Bang Zhang. A Linear-Time Algorithm for Finding Induced Planar Subgraphs. In 17th International Symposium on Experimental Algorithms (SEA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 103, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.SEA.2018.23,
  author =	{Huang, Shixun and Bao, Zhifeng and Culpepper, J. Shane and Zhang, Ping and Zhang, Bang},
  title =	{{A Linear-Time Algorithm for Finding Induced Planar Subgraphs}},
  booktitle =	{17th International Symposium on Experimental Algorithms (SEA 2018)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-070-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{103},
  editor =	{D'Angelo, Gianlorenzo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2018.23},
  URN =		{urn:nbn:de:0030-drops-89589},
  doi =		{10.4230/LIPIcs.SEA.2018.23},
  annote =	{Keywords: induced planar subgraphs, experimental analysis}
}

Zhang, Bang

Document
A Linear-Time Algorithm for Finding Induced Planar Subgraphs

Authors: Shixun Huang, Zhifeng Bao, J. Shane Culpepper, Ping Zhang, and Bang Zhang

Published in: LIPIcs, Volume 103, 17th International Symposium on Experimental Algorithms (SEA 2018)


Abstract
In this paper we study the problem of efficiently and effectively extracting induced planar subgraphs. Edwards and Farr proposed an algorithm with O(mn) time complexity to find an induced planar subgraph of at least 3n/(d+1) vertices in a graph of maximum degree d. They also proposed an alternative algorithm with O(mn) time complexity to find an induced planar subgraph of at least 3n/(d̄+1) vertices, where d̄ is the average degree of the graph. These two methods appear to be the best known when d and d̄ are small. Unfortunately, they sacrifice accuracy for lower time complexity by using indirect indicators of planarity. A limitation of those approaches is that the algorithms do not test for planarity as they proceed, and the additional cost of a separate planarity test can be significant in large graphs. In contrast, we propose a linear-time algorithm that finds an induced planar subgraph of n-ν vertices in a graph of n vertices, where ν denotes the total number of vertices shared by the detected Kuratowski subdivisions. An added benefit of our approach is that we are able to detect when a graph is planar, and terminate the reduction. The resulting planar subgraphs also do not have any rigid constraints on the maximum degree of the induced subgraph. The experimental results show that our method achieves better performance than current methods on graphs with small skewness.

Cite as

Shixun Huang, Zhifeng Bao, J. Shane Culpepper, Ping Zhang, and Bang Zhang. A Linear-Time Algorithm for Finding Induced Planar Subgraphs. In 17th International Symposium on Experimental Algorithms (SEA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 103, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.SEA.2018.23,
  author =	{Huang, Shixun and Bao, Zhifeng and Culpepper, J. Shane and Zhang, Ping and Zhang, Bang},
  title =	{{A Linear-Time Algorithm for Finding Induced Planar Subgraphs}},
  booktitle =	{17th International Symposium on Experimental Algorithms (SEA 2018)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-070-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{103},
  editor =	{D'Angelo, Gianlorenzo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2018.23},
  URN =		{urn:nbn:de:0030-drops-89589},
  doi =		{10.4230/LIPIcs.SEA.2018.23},
  annote =	{Keywords: induced planar subgraphs, experimental analysis}
}

Zhang, Shu

Document
Can a permutation be sorted by best short swaps?

Authors: Shu Zhang, Daming Zhu, Haitao Jiang, Jingjing Ma, Jiong Guo, and Haodi Feng

Published in: LIPIcs, Volume 105, 29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018)


Abstract
A short swap switches two elements with at most one element caught between them. Sorting a permutation by short swaps asks for a shortest sequence of short swaps that transforms one permutation into another. A short swap can eliminate at most three inversions. It has remained open whether one can decide if a permutation is sortable by short swaps each of which eliminates three inversions. In this paper, we present a polynomial-time algorithm for the problem: it decides in O(n) time whether a permutation can be sorted by short swaps each of which eliminates 3 inversions, and if so, sorts the permutation by such short swaps in O(n²) time, where n is the number of elements in the permutation. A short swap can cause the total length of the two element vectors to decrease by at most 4. We further propose an algorithm that recognizes in O(n) time a permutation sortable by short swaps each of which decreases the element vector length sum by 4, and if so, sorts the permutation by such short swaps in O(n²) time. This improves upon the O(n²) algorithm proposed by Heath and Vergara to decide whether a permutation is so-called lucky.
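To illustrate the definitions, the following Python sketch counts inversions and applies a short swap; on the pattern [3, 2, 1] a best short swap eliminates all three inversions at once:

    def inversions(perm):
        return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                   if perm[i] > perm[j])

    def short_swap(perm, i, j):
        # a short swap exchanges positions i < j with j - i <= 2
        # (at most one element caught between them)
        assert 0 < j - i <= 2
        p = list(perm)
        p[i], p[j] = p[j], p[i]
        return p

    perm = [3, 2, 1]
    print(inversions(perm))       # 3
    p2 = short_swap(perm, 0, 2)   # exchange 3 and 1 across the middle element
    print(p2, inversions(p2))     # [1, 2, 3] 0 -- a best short swap removes 3 inversions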

Cite as

Shu Zhang, Daming Zhu, Haitao Jiang, Jingjing Ma, Jiong Guo, and Haodi Feng. Can a permutation be sorted by best short swaps?. In 29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 105, pp. 14:1-14:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.CPM.2018.14,
  author =	{Zhang, Shu and Zhu, Daming and Jiang, Haitao and Ma, Jingjing and Guo, Jiong and Feng, Haodi},
  title =	{{Can a permutation be sorted by best short swaps?}},
  booktitle =	{29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018)},
  pages =	{14:1--14:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-074-3},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{105},
  editor =	{Navarro, Gonzalo and Sankoff, David and Zhu, Binhai},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2018.14},
  URN =		{urn:nbn:de:0030-drops-86957},
  doi =		{10.4230/LIPIcs.CPM.2018.14},
  annote =	{Keywords: Algorithm, Complexity, Short Swap, Permutation, Reversal}
}

Zhang, Le

Document
Improved Time Bounds for All Pairs Non-decreasing Paths in General Digraphs

Authors: Ran Duan, Yong Gu, and Le Zhang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We present improved algorithms for solving the All Pairs Non-decreasing Paths (APNP) problem on weighted digraphs. Before this work, the best upper bound on APNP was Õ(n^{(9+ω)/4}) = O(n^{2.844}), obtained by Vassilevska Williams [TALG 2010 and SODA'08], where ω < 2.373 is the usual exponent of matrix multiplication. Our first algorithm improves the time bound to Õ(n^{2+ω/3}) = O(n^{2.791}). The algorithm determines, for every pair of vertices s, t, the minimum last edge weight on a non-decreasing path from s to t, where a non-decreasing path is a path on which the edge weights form a non-decreasing sequence. The algorithm exploits the combinatorial properties of non-decreasing paths. We also present a slightly improved algorithm with running time O(n^{2.78}).
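For intuition about the objective (not the paper's matrix-multiplication algorithms), here is a naive single-source Python sketch that computes, for every target, the minimum last-edge weight over non-decreasing paths from s by relaxing edges in ascending weight order; running it from every source solves APNP on toy instances:

    INF = float("inf")

    def min_last_edge(n, edges, s):
        # best[v] = minimum last-edge weight over non-decreasing paths s -> v;
        # edges are relaxed in ascending weight order, with equal-weight groups
        # iterated to a fixpoint (equal-weight edges may chain)
        best = [INF] * n
        best[s] = -INF  # the empty path at s can be extended by any edge
        edges = sorted(edges, key=lambda e: e[2])
        i = 0
        while i < len(edges):
            j = i
            while j < len(edges) and edges[j][2] == edges[i][2]:
                j += 1
            group, changed = edges[i:j], True
            while changed:
                changed = False
                for u, v, w in group:
                    if best[u] <= w and w < best[v]:
                        best[v], changed = w, True
            i = j
        return best

    edges = [(0, 1, 3), (1, 2, 1), (1, 3, 5), (3, 2, 5), (0, 3, 2), (3, 4, 2)]
    print(min_last_edge(5, edges, 0))  # [-inf, 3, 5, 2, 2]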

Cite as

Ran Duan, Yong Gu, and Le Zhang. Improved Time Bounds for All Pairs Non-decreasing Paths in General Digraphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{duan_et_al:LIPIcs.ICALP.2018.44,
  author =	{Duan, Ran and Gu, Yong and Zhang, Le},
  title =	{{Improved Time Bounds for All Pairs Non-decreasing Paths in General Digraphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.44},
  URN =		{urn:nbn:de:0030-drops-90487},
  doi =		{10.4230/LIPIcs.ICALP.2018.44},
  annote =	{Keywords: Graph algorithms, Matrix multiplication, Non-decreasing paths}
}

Zhang, Yuhao

Document
Track A: Algorithms, Complexity and Games
Algorithms for the Generalized Poset Sorting Problem

Authors: Shaofeng H.-C. Jiang, Wenqian Wang, Yubo Zhang, and Yuhao Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We consider a generalized poset sorting problem (GPS), in which we are given a query graph G = (V, E) and an unknown poset 𝒫(V, ≺) defined on the same vertex set V, and the goal is to make as few queries as possible to edges in G in order to fully recover 𝒫, where each query (u, v) returns the relation between u and v, i.e., u ≺ v, v ≺ u, or u ≁ v (incomparable). This generalizes both the poset sorting problem [Faigle et al., SICOMP 88] and the generalized sorting problem [Huang et al., FOCS 11]. We give algorithms with Õ(n poly(k)) query complexity when G is a complete bipartite graph or G is stochastic under the Erdős-Rényi model, where k is the width of the poset; these generalize [Daskalakis et al., SICOMP 11], which only studies the case of a complete graph G. Both results are based on a unified framework that reduces poset sorting to partitioning the vertices with respect to a given pivot element, which may be of independent interest. Moreover, we propose novel algorithms to implement this partition oracle. Notably, we suggest a randomized BFS with vertex skipping for the stochastic G, and it yields a nearly-tight bound even for the special case of generalized sorting (for stochastic G) that is comparable to the main result of a recent work [Kuszmaul et al., FOCS 21] but is conceptually different and simpler. Our study of GPS also leads to a new Õ(n^{1 - 1/(2W)}) competitive ratio for the so-called weighted generalized sorting problem, where W is the number of distinct weights in the query graph. This problem was posed as an open question in [Charikar et al., JCSS 02], and our result makes important progress as it yields the first nontrivial sublinear ratio for general weighted query graphs (for any bounded W). We obtain this via an Õ(nk + n^{1.5}) query complexity algorithm for the case where every edge in G is guaranteed to be comparable in the poset, which generalizes the Õ(n^{1.5}) bound for generalized sorting [Huang et al., FOCS 11].
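To make the pivot-based framework concrete, here is a naive Python sketch of a partition oracle on a complete query graph, with one query per vertex (the paper's contribution is achieving such partitions with far fewer queries on restricted query graphs G); the divisibility poset below is our own toy example:

    def partition_by_pivot(vertices, pivot, query):
        # split vertices by their relation to `pivot`, where query(u, v) returns
        # '<' (u ≺ v), '>' (v ≺ u) or '?' (incomparable)
        below, above, incomparable = [], [], []
        for v in vertices:
            if v == pivot:
                continue
            r = query(v, pivot)
            (below if r == '<' else above if r == '>' else incomparable).append(v)
        return below, above, incomparable

    # hidden poset: divisibility on {1, ..., 12}
    def divisibility_query(u, v):
        return '<' if v % u == 0 else '>' if u % v == 0 else '?'

    print(partition_by_pivot(range(1, 13), 6, divisibility_query))
    # below: proper divisors of 6; above: multiples of 6; rest incomparable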

Cite as

Shaofeng H.-C. Jiang, Wenqian Wang, Yubo Zhang, and Yuhao Zhang. Algorithms for the Generalized Poset Sorting Problem. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 92:1-92:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jiang_et_al:LIPIcs.ICALP.2024.92,
  author =	{Jiang, Shaofeng H.-C. and Wang, Wenqian and Zhang, Yubo and Zhang, Yuhao},
  title =	{{Algorithms for the Generalized Poset Sorting Problem}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{92:1--92:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.92},
  URN =		{urn:nbn:de:0030-drops-202359},
  doi =		{10.4230/LIPIcs.ICALP.2024.92},
  annote =	{Keywords: sorting, poset sorting, generalized sorting}
}
Document
On the Perturbation Function of Ranking and Balance for Weighted Online Bipartite Matching

Authors: Jingxun Liang, Zhihao Gavin Tang, Yixuan Even Xu, Yuhao Zhang, and Renfei Zhou

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Ranking and Balance are arguably the two most important algorithms in the online matching literature. They achieve the same optimal competitive ratio of 1-1/e for the integral version and fractional version of online bipartite matching by Karp, Vazirani, and Vazirani (STOC 1990) respectively. The two algorithms have been generalized to weighted online bipartite matching problems, including vertex-weighted online bipartite matching and AdWords, by utilizing a perturbation function. The canonical choice of the perturbation function is f(x) = 1-e^{x-1} as it leads to the optimal competitive ratio of 1-1/e in both settings. We advance the understanding of the weighted generalizations of Ranking and Balance in this paper, with a focus on studying the effect of different perturbation functions. First, we prove that the canonical perturbation function is the unique optimal perturbation function for vertex-weighted online bipartite matching. In stark contrast, all perturbation functions achieve the optimal competitive ratio of 1-1/e in the unweighted setting. Second, we prove that the generalization of Ranking to AdWords with unknown budgets using the canonical perturbation function is at most 0.624 competitive, refuting a conjecture of Vazirani (2021). More generally, as an application of the first result, we prove that no perturbation function leads to the prominent competitive ratio of 1-1/e by establishing an upper bound of 1-1/e-0.0003. Finally, we propose the online budget-additive welfare maximization problem that is intermediate between AdWords and AdWords with unknown budgets, and we design an optimal 1-1/e competitive algorithm by generalizing Balance.
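For illustration, here is a small Monte-Carlo Python sketch of vertex-weighted Ranking with the canonical perturbation function f(x) = 1-e^{x-1}; the toy instance and the empirical estimate are ours, not from the paper:

    import math
    import random

    def perturbed_ranking(weights, arrivals, f=lambda x: 1 - math.exp(x - 1)):
        # vertex-weighted Ranking: each offline u draws x_u ~ U[0,1]; an online
        # vertex matches the unmatched neighbour maximising w_u * f(x_u)
        x = {u: random.random() for u in weights}
        matched, total = set(), 0.0
        for neighbours in arrivals:  # one neighbour list per online vertex
            free = [u for u in neighbours if u not in matched]
            if free:
                u = max(free, key=lambda u: weights[u] * f(x[u]))
                matched.add(u)
                total += weights[u]
        return total

    # toy instance: two offline vertices; both online vertices can take u1
    weights = {"u1": 2.0, "u2": 1.0}
    arrivals = [["u1", "u2"], ["u1"]]
    trials = 100000
    est = sum(perturbed_ranking(weights, arrivals) for _ in range(trials)) / trials
    print(est / 3.0)  # empirical ALG/OPT; OPT = 3 matches both offline vertices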

Cite as

Jingxun Liang, Zhihao Gavin Tang, Yixuan Even Xu, Yuhao Zhang, and Renfei Zhou. On the Perturbation Function of Ranking and Balance for Weighted Online Bipartite Matching. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 80:1-80:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{liang_et_al:LIPIcs.ESA.2023.80,
  author =	{Liang, Jingxun and Tang, Zhihao Gavin and Xu, Yixuan Even and Zhang, Yuhao and Zhou, Renfei},
  title =	{{On the Perturbation Function of Ranking and Balance for Weighted Online Bipartite Matching}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{80:1--80:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.80},
  URN =		{urn:nbn:de:0030-drops-187334},
  doi =		{10.4230/LIPIcs.ESA.2023.80},
  annote =	{Keywords: Online Matching, AdWords, Ranking, Water-Filling}
}
Document
Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs

Authors: Enze Sun, Zonghan Yang, and Yuhao Zhang

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We consider the Online Rent Minimization problem, where online jobs with release times, deadlines, and processing times must be scheduled on machines that can be rented for a fixed-length period T. The objective is to minimize the number of machine rents. This problem generalizes the Online Machine Minimization problem, where machines can be rented for an infinite period, and both problems have an asymptotically optimal competitive ratio of O(log(p_max/p_min)) for general processing times, where p_max and p_min are the maximum and minimum processing times, respectively. However, for small values of p_max/p_min, a better competitive ratio can be achieved by assuming unit-size jobs. Under this assumption, Devanur et al. (2014) gave an optimal e-competitive algorithm for Online Machine Minimization, and Chen and Zhang (2022) gave a (3e+7) ≈ 15.16-competitive algorithm for Online Rent Minimization. In this paper, we significantly improve the competitive ratio of the Online Rent Minimization problem with unit-size jobs to 6, using a clean oracle-based online algorithm framework.

Cite as

Enze Sun, Zonghan Yang, and Yuhao Zhang. Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 97:1-97:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ESA.2023.97,
  author =	{Sun, Enze and Yang, Zonghan and Zhang, Yuhao},
  title =	{{Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{97:1--97:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.97},
  URN =		{urn:nbn:de:0030-drops-187500},
  doi =		{10.4230/LIPIcs.ESA.2023.97},
  annote =	{Keywords: Online Algorithm, Scheduling, Machine Minimization, Rent Minimization}
}
Document
Minimizing the Maximum Flow Time in the Online Food Delivery Problem

Authors: Xiangyu Guo, Kelin Luo, Shi Li, and Yuhao Zhang

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
We study a common delivery problem encountered on today's online food-ordering platforms: customers order dishes online, and the restaurant delivers the food after receiving the order. Specifically, we study a problem where k vehicles of capacity c serve a set of requests ordering food from one restaurant. After a request arrives, it can be served by a vehicle moving from the restaurant to its delivery location. We are interested in serving all requests while minimizing the maximum flow time, i.e., the maximum time a customer waits to receive his/her food after submitting the order. We show that the problem is hard in both offline and online settings even when k = 1 and c = ∞: there is a hardness of approximation of Ω(n) for the offline problem, and a lower bound of Ω(n) on the competitive ratio of any online algorithm, where n is the number of points in the metric. We circumvent the strong negative results in two directions. Our main result is an O(1)-competitive online algorithm for the uncapacitated (i.e., c = ∞) food delivery problem on tree metrics; we also give a negative result showing that the condition c = ∞ is needed. We then explore the speed-augmentation model, where our online algorithm is allowed to use vehicles with faster speed. We show that a moderate speeding factor leads to a constant competitive ratio, and we prove a tight trade-off between the speeding factor and the competitive ratio.

Cite as

Xiangyu Guo, Kelin Luo, Shi Li, and Yuhao Zhang. Minimizing the Maximum Flow Time in the Online Food Delivery Problem. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{guo_et_al:LIPIcs.ISAAC.2022.33,
  author =	{Guo, Xiangyu and Luo, Kelin and Li, Shi and Zhang, Yuhao},
  title =	{{Minimizing the Maximum Flow Time in the Online Food Delivery Problem}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.33},
  URN =		{urn:nbn:de:0030-drops-173181},
  doi =		{10.4230/LIPIcs.ISAAC.2022.33},
  annote =	{Keywords: Online algorithm, Capacitated Vehicle Routing, Flow Time Optimization}
}
Document
Track A: Algorithms, Complexity and Games
Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity

Authors: Chao Liao, Qingyun Chen, Bundit Laekhanukit, and Yuhao Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
In the k-outconnected directed Steiner tree problem (k-DST), we are given an n-vertex directed graph G = (V,E) with edge costs, a connectivity requirement k, a root r ∈ V and a set of terminals T ⊆ V. The goal is to find a minimum-cost subgraph H ⊆ G that has k edge-disjoint paths from the root vertex r to every terminal t ∈ T. The problem is NP-hard, and inapproximability results are known in several parameters, e.g., hardness in terms of n: log^{2-ε}n-hardness for k = 1 [Halperin and Krauthgamer, STOC'03], 2^{log^{1-ε}n}-hardness for the general case [Cheriyan, Laekhanukit, Naves and Vetta, SODA'12], hardness in terms of k [Cheriyan et al., SODA'12; Laekhanukit, SODA'14; Manurangsi, IPL'19] and hardness in terms of |T| [Laekhanukit, SODA'14]. In this paper, we show the approximation hardness of k-DST for various parameters:
- Ω(|T|/log |T|)-approximation hardness, which holds under the standard complexity assumption NP ≠ ZPP. The inapproximability ratio is tightened to Ω(|T|) under the Strongish Planted Clique Hypothesis [Manurangsi, Rubinstein and Schramm, ITCS 2021]. The latter hardness result matches the approximation ratio of |T| obtained by a trivial approximation algorithm, thus closing the long-standing open problem.
- Ω(2^{k/2}/k)-approximation hardness for the general case of k-DST under the assumption NP ≠ ZPP. This is the first hardness result known for survivable network design problems with an inapproximability ratio exponential in k.
- Ω((k/L)^{L/4})-approximation hardness for k-DST on L-layered graphs for L ≤ O(log n). This almost matches the approximation ratio of O(k^{L-1}⋅L⋅log |T|) achieved in O(n^L) time due to Laekhanukit [ICALP'16].
We further extend our hardness results in terms of |T| to the undirected cases of k-DST, namely the single-source k-vertex-connected Steiner tree and the k-edge-connected group Steiner tree problems. Thus, we obtain Ω(|T|/log |T|) and Ω(|T|) approximation hardness for both problems under the assumption NP ≠ ZPP and the Strongish Planted Clique Hypothesis, respectively. This again matches the upper bound obtained by trivial algorithms.

Cite as

Chao Liao, Qingyun Chen, Bundit Laekhanukit, and Yuhao Zhang. Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 89:1-89:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{liao_et_al:LIPIcs.ICALP.2022.89,
  author =	{Liao, Chao and Chen, Qingyun and Laekhanukit, Bundit and Zhang, Yuhao},
  title =	{{Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{89:1--89:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.89},
  URN =		{urn:nbn:de:0030-drops-164309},
  doi =		{10.4230/LIPIcs.ICALP.2022.89},
  annote =	{Keywords: Directed Steiner Tree, Hardness of Approximation, Fault-Tolerant and Survivable Network Design}
}
Document
APPROX
Polylogarithmic Approximation Algorithm for k-Connected Directed Steiner Tree on Quasi-Bipartite Graphs

Authors: Chun-Hsiang Chan, Bundit Laekhanukit, Hao-Ting Wei, and Yuhao Zhang

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
In the k-Connected Directed Steiner Tree problem (k-DST), we are given a directed graph G = (V,E) with edge (or vertex) costs, a root vertex r, a set of q terminals T, and a connectivity requirement k > 0; the goal is to find a minimum-cost subgraph H of G such that H has k edge-disjoint paths from the root r to each terminal in T. The k-DST problem is a natural generalization of the classical Directed Steiner Tree problem (DST) to the fault-tolerant setting, in which the solution subgraph is required to have an r,t-path for every terminal t even after removing k-1 vertices or edges. Despite being a classical problem, there are not many positive results on the problem, especially for the case k ≥ 3. In this paper, we present an O(log k log q)-approximation algorithm for k-DST when the input graph is quasi-bipartite, i.e., when there is no edge joining two non-terminal vertices. To the best of our knowledge, our algorithm is the only known non-trivial polynomial-time approximation algorithm for k-DST with k ≥ 3. Our algorithm is tight for every constant k, due to the hardness result inherited from the Set Cover problem.

Cite as

Chun-Hsiang Chan, Bundit Laekhanukit, Hao-Ting Wei, and Yuhao Zhang. Polylogarithmic Approximation Algorithm for k-Connected Directed Steiner Tree on Quasi-Bipartite Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 63:1-63:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.APPROX/RANDOM.2020.63,
  author =	{Chan, Chun-Hsiang and Laekhanukit, Bundit and Wei, Hao-Ting and Zhang, Yuhao},
  title =	{{Polylogarithmic Approximation Algorithm for k-Connected Directed Steiner Tree on Quasi-Bipartite Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{63:1--63:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.63},
  URN =		{urn:nbn:de:0030-drops-126667},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.63},
  annote =	{Keywords: Approximation Algorithms, Network Design, Directed Graphs}
}
Document
Online Makespan Minimization: The Power of Restart

Authors: Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We consider the online makespan minimization problem on identical machines. Chen and Vestjens (ORL 1997) show that the largest processing time first (LPT) algorithm is 1.5-competitive. For the special case of two machines, Noga and Seiden (TCS 2001) introduce the SLEEPY algorithm that achieves a competitive ratio of (5-√5)/2 ≈ 1.382, matching the lower bound by Chen and Vestjens (ORL 1997). Furthermore, Noga and Seiden note that in many applications one can kill a job and restart it later, and they leave an open problem whether algorithms with restart can obtain better competitive ratios. We resolve this long-standing open problem on the positive end. Our algorithm has a natural rule for killing a processing job: a newly-arrived job replaces the smallest processing job if 1) the new job is larger than other pending jobs, 2) the new job is much larger than the processing one, and 3) the processed portion is small relative to the size of the new job. With an appropriate choice of parameters, we show that our algorithm improves the 1.5 competitive ratio for the general case, and the 1.382 competitive ratio for the two-machine case.
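The killing rule can be sketched directly from the abstract; the thresholds alpha and beta below are illustrative placeholders of ours, since the paper tunes its own parameters:

    def should_restart(new_job, processing_job, processed_portion, pending,
                       alpha=2.0, beta=0.25):
        # the abstract's three conditions, with illustrative thresholds:
        # 1) the new job is larger than every pending job,
        # 2) the new job is much larger than the processing one (factor alpha),
        # 3) the processed portion is small relative to the new job (factor beta)
        return (all(new_job > p for p in pending)
                and new_job > alpha * processing_job
                and processed_portion < beta * new_job)

    print(should_restart(10, 4, 2, pending=[3, 5]))  # True: all three conditions hold
    print(should_restart(10, 4, 4, pending=[3, 5]))  # False: too much already processed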

Cite as

Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang. Online Makespan Minimization: The Power of Restart. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.APPROX-RANDOM.2018.14,
  author =	{Huang, Zhiyi and Kang, Ning and Tang, Zhihao Gavin and Wu, Xiaowei and Zhang, Yuhao},
  title =	{{Online Makespan Minimization: The Power of Restart}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.14},
  URN =		{urn:nbn:de:0030-drops-94182},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.14},
  annote =	{Keywords: Online Scheduling, Makespan Minimization, Identical Machines}
}
Document
Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals

Authors: Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We introduce a weighted version of the ranking algorithm by Karp et al. (STOC 1990), and prove a competitive ratio of 0.6534 for the vertex-weighted online bipartite matching problem when online vertices arrive in random order. Our result shows that random arrivals help beat the 1-1/e barrier even in the vertex-weighted case. We build on the randomized primal-dual framework by Devanur et al. (SODA 2013) and design a two-dimensional gain-sharing function, which depends not only on the rank of the offline vertex, but also on the arrival time of the online vertex. To our knowledge, this is the first competitive ratio strictly larger than 1-1/e for an online bipartite matching problem achieved under the randomized primal-dual framework. Our algorithm has a natural interpretation: offline vertices offer a larger portion of their weights to the online vertices as time goes by, and each online vertex matches the neighbor with the highest offer at its arrival.

Cite as

Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang. Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 79:1-79:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.ICALP.2018.79,
  author =	{Huang, Zhiyi and Tang, Zhihao Gavin and Wu, Xiaowei and Zhang, Yuhao},
  title =	{{Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{79:1--79:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.79},
  URN =		{urn:nbn:de:0030-drops-90830},
  doi =		{10.4230/LIPIcs.ICALP.2018.79},
  annote =	{Keywords: Vertex Weighted, Online Bipartite Matching, Randomized Primal-Dual}
}

Zhang, Lingming

Document
Learning to Accelerate Symbolic Execution via Code Transformation

Authors: Junjie Chen, Wenxiang Hu, Lingming Zhang, Dan Hao, Sarfraz Khurshid, and Lu Zhang

Published in: LIPIcs, Volume 109, 32nd European Conference on Object-Oriented Programming (ECOOP 2018)


Abstract
Symbolic execution is an effective but expensive technique for automated test generation. Over the years, a large number of refined symbolic execution techniques have been proposed to improve its efficiency. However, the efficiency problem remains, and largely limits the application of symbolic execution in practice. Orthogonal to refined symbolic execution, in this paper we propose to accelerate symbolic execution through semantics-preserving code transformation of the target programs. As an initial step in this direction, we adopt a particular class of code transformations, compiler optimizations, which were originally proposed to accelerate concrete program execution by transforming a source program into a semantically equivalent but more efficient (e.g., faster or smaller) target program. However, compiler optimizations are mostly designed to accelerate concrete rather than symbolic execution, and recent work reported that no unified compiler-optimization setting accelerates symbolic execution across all programs. Therefore, in this work we propose a machine-learning based approach to tuning compiler optimizations to accelerate symbolic execution, whose results may also guide the further design of code transformations specific to symbolic execution. In particular, the proposed approach, LEO, separates source-code functions and libraries through our program-splitter, and predicts each compiler optimization individually (i.e., whether a given type of code transformation should be enabled) by analyzing the performance of existing symbolic execution runs. Finally, LEO applies symbolic execution to the code transformed by compiler optimization (through our local-optimizer). We conduct an empirical study on GNU Coreutils programs using the KLEE symbolic execution engine. The results show that LEO significantly accelerates symbolic execution, outperforming the default KLEE configurations (i.e., turning on/off all compiler optimizations) in various settings: e.g., with the default training/testing time, LEO achieves the highest line coverage in 50/68 programs, and its average improvement rate over all programs is 46.48%/88.92% in terms of line coverage compared with turning on/off all compiler optimizations.
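
To make the prediction step concrete, here is a minimal sketch of learning one on/off decision per optimization pass, under assumed inputs: hypothetical static program features X_train and binary labels Y_train recording whether each pass sped up symbolic execution in past runs. This is not LEO's actual feature set, model, or program-splitter.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

def train_pass_selector(X_train, Y_train):
    """X_train: (n_programs, n_features) static code features (hypothetical).
    Y_train: (n_programs, n_passes) 0/1 labels, 1 = enabling the pass helped."""
    model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
    model.fit(X_train, Y_train)
    return model

def choose_passes(model, features, pass_names):
    """Return the optimization passes predicted to speed up symbolic execution."""
    prediction = model.predict([features])[0]
    return [name for name, on in zip(pass_names, prediction) if on]
```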

Cite as

Junjie Chen, Wenxiang Hu, Lingming Zhang, Dan Hao, Sarfraz Khurshid, and Lu Zhang. Learning to Accelerate Symbolic Execution via Code Transformation. In 32nd European Conference on Object-Oriented Programming (ECOOP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 109, pp. 6:1-6:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ECOOP.2018.6,
  author =	{Chen, Junjie and Hu, Wenxiang and Zhang, Lingming and Hao, Dan and Khurshid, Sarfraz and Zhang, Lu},
  title =	{{Learning to Accelerate Symbolic Execution via Code Transformation}},
  booktitle =	{32nd European Conference on Object-Oriented Programming (ECOOP 2018)},
  pages =	{6:1--6:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-079-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{109},
  editor =	{Millstein, Todd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2018.6},
  URN =		{urn:nbn:de:0030-drops-92115},
  doi =		{10.4230/LIPIcs.ECOOP.2018.6},
  annote =	{Keywords: Symbolic Execution, Code Transformation, Machine Learning}
}

Zhang, Haoyuan

Document
FHJ: A Formal Model for Hierarchical Dispatching and Overriding

Authors: Yanlin Wang, Haoyuan Zhang, Bruno C. d. S. Oliveira, and Marco Servetto

Published in: LIPIcs, Volume 109, 32nd European Conference on Object-Oriented Programming (ECOOP 2018)


Abstract
Multiple inheritance is a valuable feature for Object-Oriented Programming. However, it is also tricky to get right, as illustrated by the extensive literature on the topic. A key issue is the ambiguity arising from inheriting multiple parents, which can have conflicting methods. Numerous existing works provide solutions for conflicts which arise from diamond inheritance, i.e., conflicts between implementations that share a common ancestor. However, most mechanisms are inadequate to deal with unintentional method conflicts: conflicts which arise from two unrelated methods that happen to share the same name and signature. This paper presents a new model called Featherweight Hierarchical Java (FHJ) that deals with unintentional method conflicts. In our new model, which is partly inspired by C++, conflicting methods inherited from unrelated parents can coexist in the same class, and hierarchical dispatching supports unambiguous lookups in the presence of such conflicting methods. To avoid ambiguity, hierarchical information is employed in method dispatching, which uses a combination of static and dynamic type information to choose the implementation of a method at run-time. Furthermore, unlike all existing inheritance models, our model supports hierarchical method overriding: that is, methods can be independently overridden along the multiple inheritance hierarchy. We give illustrative examples of our language features and formalize FHJ as a minimal Featherweight-Java-style calculus.
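
The kind of unintentional conflict FHJ targets is easy to reproduce in any language with multiple inheritance. The Python sketch below shows only the problem: Python's method resolution order silently picks one parent, whereas FHJ lets both methods coexist and disambiguates them via hierarchical dispatching.

```python
# Two unrelated classes happen to define draw() with the same signature.
# Python neither flags the conflict nor lets the two methods coexist
# independently; it just picks the first parent in the MRO.

class Cowboy:
    def draw(self):
        return "draws a gun"

class Painter:
    def draw(self):
        return "draws a picture"

class CowboyPainter(Cowboy, Painter):
    pass

c = CowboyPainter()
print(c.draw())          # "draws a gun" -- the conflict is resolved silently
print(Painter.draw(c))   # explicit qualified call as a C++-style workaround
```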

Cite as

Yanlin Wang, Haoyuan Zhang, Bruno C. d. S. Oliveira, and Marco Servetto. FHJ: A Formal Model for Hierarchical Dispatching and Overriding. In 32nd European Conference on Object-Oriented Programming (ECOOP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 109, pp. 20:1-20:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.ECOOP.2018.20,
  author =	{Wang, Yanlin and Zhang, Haoyuan and Oliveira, Bruno C. d. S. and Servetto, Marco},
  title =	{{FHJ: A Formal Model for Hierarchical Dispatching and Overriding}},
  booktitle =	{32nd European Conference on Object-Oriented Programming (ECOOP 2018)},
  pages =	{20:1--20:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-079-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{109},
  editor =	{Millstein, Todd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2018.20},
  URN =		{urn:nbn:de:0030-drops-92259},
  doi =		{10.4230/LIPIcs.ECOOP.2018.20},
  annote =	{Keywords: multiple inheritance, hierarchical dispatching, OOP, language design}
}

Zhang, Qiuyi (Richard)

Document
New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy

Authors: Qiuyi (Richard) Zhang, Satish Rao, and Tandy Warnow

Published in: LIPIcs, Volume 113, 18th International Workshop on Algorithms in Bioinformatics (WABI 2018)


Abstract
Absolute fast converging (AFC) phylogeny estimation methods are ones that have been proven to recover the true tree with high probability given sequences whose lengths are polynomial in the number of leaves in the tree (once the shortest and longest branch lengths are fixed). While there has been a large literature on AFC methods, the best in terms of empirical performance was DCM_NJ, published in SODA 2001. The main empirical advantage of DCM_NJ over other AFC methods is its use of neighbor joining (NJ) to construct trees on smaller taxon subsets, which are then combined into a tree on the full set of species using a supertree method; in contrast, the other AFC methods in essence depend on quartet trees that are computed independently of each other, which reduces accuracy compared to neighbor joining. However, DCM_NJ is unlikely to scale to large datasets due to its reliance on supertree methods, as no current supertree methods are able to scale to large datasets with high accuracy. In this study we present a new approach to large-scale phylogeny estimation that shares some of the features of DCM_NJ but bypasses the use of supertree methods. We prove that this new approach is AFC and runs in polynomial time. Furthermore, we describe variations on this basic approach that can be used with leaf-disjoint constraint trees (computed using methods such as maximum likelihood) to produce other AFC methods that are likely to provide even better accuracy. Thus, we present a new generalizable technique for large-scale tree estimation that is designed to improve scalability for phylogeny estimation methods to ultra-large datasets, and that can be used in a variety of settings (including tree estimation from unaligned sequences, and species tree estimation from gene trees).

Cite as

Qiuyi (Richard) Zhang, Satish Rao, and Tandy Warnow. New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy. In 18th International Workshop on Algorithms in Bioinformatics (WABI 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 113, pp. 8:1-8:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.WABI.2018.8,
  author =	{Zhang, Qiuyi (Richard) and Rao, Satish and Warnow, Tandy},
  title =	{{New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy}},
  booktitle =	{18th International Workshop on Algorithms in Bioinformatics (WABI 2018)},
  pages =	{8:1--8:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-082-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{113},
  editor =	{Parida, Laxmi and Ukkonen, Esko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2018.8},
  URN =		{urn:nbn:de:0030-drops-93108},
  doi =		{10.4230/LIPIcs.WABI.2018.8},
  annote =	{Keywords: phylogeny estimation, short quartets, sample complexity, absolute fast converging methods, neighbor joining, maximum likelihood}
}

Zhang, Chuanrong

Document
Short Paper
Scalable Spatial Join for WFS Clients (Short Paper)

Authors: Tian Zhao, Chuanrong Zhang, and Zhijie Zhang

Published in: LIPIcs, Volume 114, 10th International Conference on Geographic Information Science (GIScience 2018)


Abstract
Web Feature Service (WFS) is a popular Web service for geospatial data, which is represented as sets of features that can be queried using the GetFeature request protocol. However, queries involving spatial joins are not efficiently supported by WFS server implementations such as GeoServer. Performing spatial joins at the client side is unfortunately expensive and not scalable. In this paper, we propose a simple yet scalable strategy for performing spatial joins at the client side after querying WFS data. Our approach is based on the fact that Web clients of WFS data are often used for query-based visual exploration. In visual exploration, the queried spatial objects can be filtered for a particular zoom level and spatial extent and simplified before the spatial join while still serving their purpose. This way, we can drastically reduce the number of spatial objects retrieved from WFS servers and reduce the computation cost of the spatial join, so that even a simple plane-sweep algorithm can yield acceptable performance for interactive applications.
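
As a concrete illustration of the join step, here is a minimal plane-sweep sketch over axis-aligned bounding boxes of already filtered and simplified features; a real client would refine the reported candidate pairs with exact geometry tests. The function names are ours, not the paper's.

```python
def overlaps_y(a, b):
    """y-intervals of two (xmin, ymin, xmax, ymax) boxes overlap."""
    return a[1] <= b[3] and b[1] <= a[3]

def plane_sweep_join(boxes_a, boxes_b):
    """Report pairs (i, j) with boxes_a[i] intersecting boxes_b[j]."""
    events = sorted(
        (b[0], side, i)
        for side, boxes in ((0, boxes_a), (1, boxes_b))
        for i, b in enumerate(boxes)
    )
    all_boxes = (boxes_a, boxes_b)
    active = ([], [])              # boxes currently spanning the sweep line
    out = []
    for x, side, i in events:
        other = 1 - side
        # retire boxes of the other set that end before the sweep line
        active[other][:] = [j for j in active[other] if all_boxes[other][j][2] >= x]
        for j in active[other]:
            if overlaps_y(all_boxes[side][i], all_boxes[other][j]):
                out.append((i, j) if side == 0 else (j, i))
        active[side].append(i)
    return out
```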

Cite as

Tian Zhao, Chuanrong Zhang, and Zhijie Zhang. Scalable Spatial Join for WFS Clients (Short Paper). In 10th International Conference on Geographic Information Science (GIScience 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 114, pp. 72:1-72:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{zhao_et_al:LIPIcs.GISCIENCE.2018.72,
  author =	{Zhao, Tian and Zhang, Chuanrong and Zhang, Zhijie},
  title =	{{Scalable Spatial Join for WFS Clients}},
  booktitle =	{10th International Conference on Geographic Information Science (GIScience 2018)},
  pages =	{72:1--72:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-083-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{114},
  editor =	{Winter, Stephan and Griffin, Amy and Sester, Monika},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GISCIENCE.2018.72},
  URN =		{urn:nbn:de:0030-drops-94007},
  doi =		{10.4230/LIPIcs.GISCIENCE.2018.72},
  annote =	{Keywords: WFS, SPARQL, Spatial Join}
}

Zhang, Zhijie

Document
Simple Deterministic Approximation for Submodular Multiple Knapsack Problem

Authors: Xiaoming Sun, Jialin Zhang, and Zhijie Zhang

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Submodular maximization has been a central topic in theoretical computer science and combinatorial optimization over the last decades. Many well-performing approximation algorithms have been designed for the problem under a variety of constraints. In this paper, we consider the submodular multiple knapsack problem (SMKP). In SMKP, the profit of each subset of elements is specified by a monotone submodular function. The goal is to find a feasible packing of elements over multiple bins (knapsacks) that maximizes the profit. Recently, Fairstein et al. [ESA20] proposed a nearly optimal (1-e^{-1}-ε)-approximation algorithm for SMKP. Their algorithm combines a configuration LP, a grouping technique for bin packing, and the continuous greedy algorithm for submodular maximization. As a result, the algorithm is somewhat sophisticated and inherently randomized. In this paper, we present an arguably simple deterministic combinatorial algorithm for SMKP, which achieves a (1-e^{-1}-ε)-approximation ratio. Our algorithm is based on very different ideas compared with Fairstein et al. [ESA20].
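
To fix notation, the sketch below spells out the SMKP setup with a value oracle f and a naive greedy. This plain greedy only illustrates the problem; it does not attain the (1-e^{-1}-ε) guarantee of the algorithms discussed above.

```python
def greedy_smkp(elements, size, capacities, f):
    """elements: list of items; size: dict item -> size; capacities: list of
    bin capacities; f: monotone submodular value oracle over sets of items."""
    load = [0.0] * len(capacities)
    chosen = set()
    while True:
        best = None                                  # (marginal gain, item, bin)
        for e in elements:
            if e in chosen:
                continue
            gain = f(chosen | {e}) - f(chosen)
            for b, cap in enumerate(capacities):
                if load[b] + size[e] <= cap:         # first bin that fits
                    if best is None or gain > best[0]:
                        best = (gain, e, b)
                    break
        if best is None or best[0] <= 0:
            return chosen, load
        gain, e, b = best
        chosen.add(e)
        load[b] += size[e]
```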

Cite as

Xiaoming Sun, Jialin Zhang, and Zhijie Zhang. Simple Deterministic Approximation for Submodular Multiple Knapsack Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 98:1-98:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ESA.2023.98,
  author =	{Sun, Xiaoming and Zhang, Jialin and Zhang, Zhijie},
  title =	{{Simple Deterministic Approximation for Submodular Multiple Knapsack Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{98:1--98:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.98},
  URN =		{urn:nbn:de:0030-drops-187517},
  doi =		{10.4230/LIPIcs.ESA.2023.98},
  annote =	{Keywords: Submodular maximization, knapsack problem, deterministic algorithm}
}
Document
Short Paper
Scalable Spatial Join for WFS Clients (Short Paper)

Authors: Tian Zhao, Chuanrong Zhang, and Zhijie Zhang

Published in: LIPIcs, Volume 114, 10th International Conference on Geographic Information Science (GIScience 2018)


Abstract
Web Feature Service (WFS) is a popular Web service for geospatial data, which is represented as sets of features that can be queried using the GetFeature request protocol. However, queries involving spatial joins are not efficiently supported by WFS server implementations such as GeoServer. Performing spatial joins at the client side is unfortunately expensive and not scalable. In this paper, we propose a simple yet scalable strategy for performing spatial joins at the client side after querying WFS data. Our approach is based on the fact that Web clients of WFS data are often used for query-based visual exploration. In visual exploration, the queried spatial objects can be filtered for a particular zoom level and spatial extent and simplified before the spatial join while still serving their purpose. This way, we can drastically reduce the number of spatial objects retrieved from WFS servers and reduce the computation cost of the spatial join, so that even a simple plane-sweep algorithm can yield acceptable performance for interactive applications.

Cite as

Tian Zhao, Chuanrong Zhang, and Zhijie Zhang. Scalable Spatial Join for WFS Clients (Short Paper). In 10th International Conference on Geographic Information Science (GIScience 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 114, pp. 72:1-72:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{zhao_et_al:LIPIcs.GISCIENCE.2018.72,
  author =	{Zhao, Tian and Zhang, Chuanrong and Zhang, Zhijie},
  title =	{{Scalable Spatial Join for WFS Clients}},
  booktitle =	{10th International Conference on Geographic Information Science (GIScience 2018)},
  pages =	{72:1--72:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-083-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{114},
  editor =	{Winter, Stephan and Griffin, Amy and Sester, Monika},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GISCIENCE.2018.72},
  URN =		{urn:nbn:de:0030-drops-94007},
  doi =		{10.4230/LIPIcs.GISCIENCE.2018.72},
  annote =	{Keywords: WFS, SPARQL, Spatial Join}
}

Zhang, Haozhe

Document
Conjunctive Queries with Free Access Patterns Under Updates

Authors: Ahmet Kara, Milos Nikolic, Dan Olteanu, and Haozhe Zhang

Published in: LIPIcs, Volume 255, 26th International Conference on Database Theory (ICDT 2023)


Abstract
We study the problem of answering conjunctive queries with free access patterns under updates. A free access pattern is a partition of the free variables of the query into input and output. The query returns tuples over the output variables given a tuple of values over the input variables. We introduce a fully dynamic evaluation approach for such queries. We also give a syntactic characterisation of those queries that admit constant time per single-tuple update and whose output tuples can be enumerated with constant delay given an input tuple. Finally, we chart the complexity trade-off between the preprocessing time, update time and enumeration delay for such queries. For a class of queries, our approach achieves optimal, albeit non-constant, update time and delay. Their optimality is predicated on the Online Matrix-Vector Multiplication conjecture. Our results recover prior work on the dynamic evaluation of conjunctive queries without access patterns.
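
For intuition, the static (update-free) version of answering a query with a free access pattern is just a hash index keyed by the input variables, from which output tuples are enumerated with constant delay; maintaining such structures under updates is the paper's subject and is not shown. A minimal sketch:

```python
from collections import defaultdict

def build_access_index(result_tuples, input_positions):
    """Index the (materialised) query result by the input variables."""
    index = defaultdict(list)
    for t in result_tuples:
        key = tuple(t[i] for i in input_positions)
        out = tuple(t[i] for i in range(len(t)) if i not in input_positions)
        index[key].append(out)
    return index

def enumerate_outputs(index, input_tuple):
    """Constant-delay enumeration of output tuples for one access."""
    yield from index.get(input_tuple, ())

# usage: idx = build_access_index([(1, "a"), (1, "b"), (2, "c")], {0})
#        list(enumerate_outputs(idx, (1,)))  ->  [("a",), ("b",)]
```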

Cite as

Ahmet Kara, Milos Nikolic, Dan Olteanu, and Haozhe Zhang. Conjunctive Queries with Free Access Patterns Under Updates. In 26th International Conference on Database Theory (ICDT 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 255, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kara_et_al:LIPIcs.ICDT.2023.17,
  author =	{Kara, Ahmet and Nikolic, Milos and Olteanu, Dan and Zhang, Haozhe},
  title =	{{Conjunctive Queries with Free Access Patterns Under Updates}},
  booktitle =	{26th International Conference on Database Theory (ICDT 2023)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-270-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{255},
  editor =	{Geerts, Floris and Vandevoort, Brecht},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2023.17},
  URN =		{urn:nbn:de:0030-drops-177599},
  doi =		{10.4230/LIPIcs.ICDT.2023.17},
  annote =	{Keywords: fully dynamic algorithm, enumeration delay, complexity trade-off, dichotomy}
}
Document
Evaluation Trade-Offs for Acyclic Conjunctive Queries

Authors: Ahmet Kara, Milos Nikolic, Dan Olteanu, and Haozhe Zhang

Published in: LIPIcs, Volume 252, 31st EACSL Annual Conference on Computer Science Logic (CSL 2023)


Abstract
We consider the evaluation of acyclic conjunctive queries, where the evaluation time is decomposed into preprocessing time and enumeration delay. In a seminal paper at CSL'07, Bagan, Durand, and Grandjean showed that acyclic queries can be evaluated with linear preprocessing time and linear enumeration delay. If the query is free-connex, the enumeration delay becomes constant. Further prior work showed that constant enumeration delay can be achieved for arbitrary acyclic conjunctive queries at the expense of a preprocessing time that is characterised by the fractional hypertree width. We introduce an approach that exposes a trade-off between preprocessing time and enumeration delay for acyclic conjunctive queries. The aforementioned prior works represent extremes in this trade-off space. Yet our approach also allows the enumeration delay and the preprocessing time to lie between these extremes; in particular, the delay may lie between constant and linear time. Our approach decomposes the given query into subqueries and achieves for each subquery a trade-off that depends on a parameter controlling the times for preprocessing and enumeration. The complexity of the query is given by the Pareto optimal points of a bi-objective optimisation program whose inputs are possible query decompositions and parameter values.

Cite as

Ahmet Kara, Milos Nikolic, Dan Olteanu, and Haozhe Zhang. Evaluation Trade-Offs for Acyclic Conjunctive Queries. In 31st EACSL Annual Conference on Computer Science Logic (CSL 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 252, pp. 29:1-29:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kara_et_al:LIPIcs.CSL.2023.29,
  author =	{Kara, Ahmet and Nikolic, Milos and Olteanu, Dan and Zhang, Haozhe},
  title =	{{Evaluation Trade-Offs for Acyclic Conjunctive Queries}},
  booktitle =	{31st EACSL Annual Conference on Computer Science Logic (CSL 2023)},
  pages =	{29:1--29:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-264-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{252},
  editor =	{Klin, Bartek and Pimentel, Elaine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2023.29},
  URN =		{urn:nbn:de:0030-drops-174907},
  doi =		{10.4230/LIPIcs.CSL.2023.29},
  annote =	{Keywords: acyclic queries, query evaluation, enumeration delay}
}
Document
Counting Triangles under Updates in Worst-Case Optimal Time

Authors: Ahmet Kara, Hung Q. Ngo, Milos Nikolic, Dan Olteanu, and Haozhe Zhang

Published in: LIPIcs, Volume 127, 22nd International Conference on Database Theory (ICDT 2019)


Abstract
We consider the problem of incrementally maintaining the triangle count query under single-tuple updates to the input relations. We introduce an approach that exhibits a space-time tradeoff such that the space-time product is quadratic in the size of the input database and the update time can be as low as the square root of this size. This lowest update time is worst-case optimal conditioned on the Online Matrix-Vector Multiplication conjecture. The classical and factorized incremental view maintenance approaches are recovered as special cases of our approach within the space-time tradeoff. In particular, they require linear-time maintenance under updates, which is suboptimal. Our approach can also count all triangles in a static database in the worst-case optimal time needed for enumerating them.
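
For contrast, the classical first-order maintenance baseline is easy to state: on a single-edge update, adjust the count by the number of common neighbours of the two endpoints. The sketch below uses a graph formulation (a simplification of the paper's relational setting) and takes linear time per update in the worst case; the heavy/light partitioning behind the square-root update time is not shown.

```python
from collections import defaultdict

class TriangleCounter:
    """Maintain the triangle count of an undirected graph under edge updates."""

    def __init__(self):
        self.adj = defaultdict(set)
        self.count = 0

    def insert(self, a, b):
        # every common neighbour of a and b closes a new triangle
        self.count += len(self.adj[a] & self.adj[b])
        self.adj[a].add(b)
        self.adj[b].add(a)

    def delete(self, a, b):
        self.adj[a].discard(b)
        self.adj[b].discard(a)
        # triangles through (a, b) are exactly the remaining common neighbours
        self.count -= len(self.adj[a] & self.adj[b])
```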

Cite as

Ahmet Kara, Hung Q. Ngo, Milos Nikolic, Dan Olteanu, and Haozhe Zhang. Counting Triangles under Updates in Worst-Case Optimal Time. In 22nd International Conference on Database Theory (ICDT 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 127, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kara_et_al:LIPIcs.ICDT.2019.4,
  author =	{Kara, Ahmet and Ngo, Hung Q. and Nikolic, Milos and Olteanu, Dan and Zhang, Haozhe},
  title =	{{Counting Triangles under Updates in Worst-Case Optimal Time}},
  booktitle =	{22nd International Conference on Database Theory (ICDT 2019)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-101-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{127},
  editor =	{Barcelo, Pablo and Calautti, Marco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2019.4},
  URN =		{urn:nbn:de:0030-drops-103068},
  doi =		{10.4230/LIPIcs.ICDT.2019.4},
  annote =	{Keywords: incremental view maintenance, amortized analysis, data skew}
}

Zhang, Tianyu

Document
Reliable Dynamic Packet Scheduling over Lossy Real-Time Wireless Networks

Authors: Tao Gong, Tianyu Zhang, Xiaobo Sharon Hu, Qingxu Deng, Michael Lemmon, and Song Han

Published in: LIPIcs, Volume 133, 31st Euromicro Conference on Real-Time Systems (ECRTS 2019)


Abstract
Along with the rapid development and deployment of real-time wireless network (RTWN) technologies in a wide range of applications, effective packet scheduling algorithms have been playing a critical role in RTWNs for achieving desired Quality of Service (QoS) for real-time sensing and control, especially in the presence of unexpected disturbances. Most existing solutions in the literature focus on either static or dynamic schedule construction to meet the desired QoS requirements, but have a common assumption that all wireless links are reliable. Although this assumption simplifies the algorithm design and analysis, it is not realistic in real-life settings. To address this drawback, this paper introduces a novel reliable dynamic packet scheduling framework, called RD-PaS. RD-PaS can not only construct static schedules to meet both the timing and reliability requirements of end-to-end packet transmissions in RTWNs for a given periodic network traffic pattern, but also construct new schedules rapidly to handle abruptly increased network traffic induced by unexpected disturbances while minimizing the impact on existing network flows. The functional correctness of the RD-PaS framework has been validated through its implementation and deployment on a real-life RTWN testbed. Extensive simulation-based experiments have also been performed to evaluate the effectiveness of RD-PaS, especially in large-scale network settings.

Cite as

Tao Gong, Tianyu Zhang, Xiaobo Sharon Hu, Qingxu Deng, Michael Lemmon, and Song Han. Reliable Dynamic Packet Scheduling over Lossy Real-Time Wireless Networks. In 31st Euromicro Conference on Real-Time Systems (ECRTS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 133, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{gong_et_al:LIPIcs.ECRTS.2019.11,
  author =	{Gong, Tao and Zhang, Tianyu and Hu, Xiaobo Sharon and Deng, Qingxu and Lemmon, Michael and Han, Song},
  title =	{{Reliable Dynamic Packet Scheduling over Lossy Real-Time Wireless Networks}},
  booktitle =	{31st Euromicro Conference on Real-Time Systems (ECRTS 2019)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-110-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{133},
  editor =	{Quinton, Sophie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2019.11},
  URN =		{urn:nbn:de:0030-drops-107482},
  doi =		{10.4230/LIPIcs.ECRTS.2019.11},
  annote =	{Keywords: Real-time wireless networks, lossy links, dynamic packet scheduling, reliability}
}

Zhang, Haowen

Document
Validating Paired-End Read Alignments in Sequence Graphs

Authors: Chirag Jain, Haowen Zhang, Alexander Dilthey, and Srinivas Aluru

Published in: LIPIcs, Volume 143, 19th International Workshop on Algorithms in Bioinformatics (WABI 2019)


Abstract
Graph-based non-linear reference structures such as variation graphs and colored de Bruijn graphs enable incorporation of full genomic diversity within a population. However, transitioning from a simple string-based reference to graphs requires addressing many computational challenges, one of which concerns accurately mapping sequencing read sets to graphs. Paired-end Illumina sequencing is a commonly used sequencing platform in genomics, where the paired-end distance constraints allow disambiguation of repeats. Many recent works have explored provably good index-based and alignment-based strategies for mapping individual reads to graphs. However, validating distance constraints efficiently over graphs is not trivial, and existing sequence-to-graph mappers rely on heuristics. We introduce a mathematical formulation of the problem, and provide a new algorithm to solve it exactly. We take advantage of the high sparsity of reference graphs, and use sparse matrix-matrix multiplications (SpGEMM) to build an index which can be queried efficiently by a mapping algorithm for validating the distance constraints. Effectiveness of the algorithm is demonstrated using real reference graphs, including a human MHC variation graph, and a pan-genome de Bruijn graph built using genomes of 20 B. anthracis strains. While the one-time indexing time can vary from a few minutes to a few hours using our algorithm, answering a million distance queries takes less than a second.
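
A toy version of the index idea using SciPy's sparse matrix products: powers of the 0/1 adjacency matrix record which vertex pairs are connected by a walk of each length. The paper's index is substantially more refined (it accounts for sequence lengths on vertices and is queried for paired-end constraints), so treat this purely as an illustration of why SpGEMM is the natural primitive.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity

def distance_index(A, dmax):
    """powers[k][u, v] != 0 iff a walk of length exactly k goes from u to v."""
    A = csr_matrix(A, dtype=np.int64)
    A.data[:] = 1
    powers = [identity(A.shape[0], dtype=np.int64, format="csr")]
    for _ in range(dmax):
        p = powers[-1] @ A          # one SpGEMM per walk length
        p.data[:] = 1               # keep entries 0/1 so counts do not blow up
        powers.append(p)
    return powers

def within_distance(powers, u, v, dmin, dmax):
    """Is u connected to v by a walk of length in [dmin, dmax]?"""
    return any(powers[k][u, v] != 0 for k in range(dmin, dmax + 1))
```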

Cite as

Chirag Jain, Haowen Zhang, Alexander Dilthey, and Srinivas Aluru. Validating Paired-End Read Alignments in Sequence Graphs. In 19th International Workshop on Algorithms in Bioinformatics (WABI 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 143, pp. 17:1-17:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{jain_et_al:LIPIcs.WABI.2019.17,
  author =	{Jain, Chirag and Zhang, Haowen and Dilthey, Alexander and Aluru, Srinivas},
  title =	{{Validating Paired-End Read Alignments in Sequence Graphs}},
  booktitle =	{19th International Workshop on Algorithms in Bioinformatics (WABI 2019)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-123-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{143},
  editor =	{Huber, Katharina T. and Gusfield, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2019.17},
  URN =		{urn:nbn:de:0030-drops-110470},
  doi =		{10.4230/LIPIcs.WABI.2019.17},
  annote =	{Keywords: Sequence graphs, read mapping, index, sparse matrix-matrix multiplication}
}

Zhang, Jiaheng

Document
Searching for Cryptogenography Upper Bounds via Sum of Square Programming

Authors: Dominik Scheder, Shuyang Tang, and Jiaheng Zhang

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
Cryptogenography is a secret-leaking game in which one of n players is holding a secret to be leaked. The n players engage in communication as to (1) reveal the secret while (2) keeping the identity of the secret holder as obscure as possible. All communication is public, and no computational hardness assumptions are made, i.e., the setting is purely information theoretic. Brody, Jakobsen, Scheder, and Winkler [Joshua Brody et al., 2014] formally defined this problem, showed that it has an equivalent geometric characterization, and gave upper and lower bounds for the case in which the n players want to leak a single bit. Surprisingly, even the easiest case, where two players want to leak a secret consisting of a single bit, is not completely understood. Doerr and Künnemann [Benjamin Doerr and Marvin Künnemann, 2016] showed how to automatically search for good protocols using a computer, thus finding an improved protocol for the 1-bit two-player case. In this work, we show how the search for upper bounds (impossibility results) can be formulated as a Sum of Squares program. We implement this idea for the 1-bit two-player case and significantly improve the previous upper bound from 47/128 = 0.3671875 to 0.35183.
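
The core move, phrasing a nonnegativity certificate as a semidefinite feasibility problem, can be shown on a toy univariate example. The sketch below uses cvxpy (our choice, not the authors' toolchain) to certify p(x) = x^4 - 2x^2 + 1 ≥ 0 by finding a positive semidefinite Q with p(x) = z^T Q z for z = (1, x, x^2); the paper's actual SOS program encodes protocol constraints and is far larger.

```python
import cvxpy as cp

# Match the coefficients of z^T Q z, z = (1, x, x^2), against p(x).
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                          # Q positive semidefinite
    Q[0, 0] == 1,                    # x^0 coefficient
    2 * Q[0, 1] == 0,                # x^1 coefficient
    2 * Q[0, 2] + Q[1, 1] == -2,     # x^2 coefficient
    2 * Q[1, 2] == 0,                # x^3 coefficient
    Q[2, 2] == 1,                    # x^4 coefficient
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # "optimal" means such a Q exists, i.e. p is a sum of squares
```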

Cite as

Dominik Scheder, Shuyang Tang, and Jiaheng Zhang. Searching for Cryptogenography Upper Bounds via Sum of Square Programming. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 31:1-31:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{scheder_et_al:LIPIcs.ISAAC.2019.31,
  author =	{Scheder, Dominik and Tang, Shuyang and Zhang, Jiaheng},
  title =	{{Searching for Cryptogenography Upper Bounds via Sum of Square Programming}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{31:1--31:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.31},
  URN =		{urn:nbn:de:0030-drops-115276},
  doi =		{10.4230/LIPIcs.ISAAC.2019.31},
  annote =	{Keywords: Communication Complexity, Secret Leaking, Sum of Squares Programming}
}

Zhang, Zhen

Document
Faster Approximation Schemes for (Constrained) k-Means with Outliers

Authors: Zhen Zhang, Junyu Huang, and Qilong Feng

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Given a set of n points in ℝ^d and two positive integers k and m, the Euclidean k-means with outliers problem aims to remove at most m points, referred to as outliers, and minimize the k-means cost function for the remaining points. Developing algorithms for this problem remains an active area of research due to its prevalence in applications involving noisy data. In this paper, we give a (1+ε)-approximation algorithm that runs in n²d((k+m)ε^{-1})^O(kε^{-1}) time for the problem. When combined with a coreset construction method, the running time of the algorithm can be improved to be linear in n. For the case where k is a constant, this represents the first polynomial-time approximation scheme for the problem: Existing algorithms with the same approximation guarantee run in polynomial time only when both k and m are constants. Furthermore, our approach generalizes to variants of k-means with outliers incorporating additional constraints on instances, such as those related to capacities and fairness.
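
To make the objective concrete, here is the well-known Lloyd-style "k-means--" heuristic of Chawla and Gionis, which excludes the m points farthest from their current centers from each center update. It is only a local heuristic, not the sampling-based (1+ε)-approximation scheme of the paper.

```python
import numpy as np

def kmeans_minus_minus(X, k, m, iters=50, seed=0):
    """Lloyd iterations that treat the m farthest points as outliers."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest, dist = d2.argmin(1), d2.min(1)
        inliers = np.argsort(dist)[: len(X) - m]     # drop the m farthest points
        for j in range(k):
            pts = X[inliers][nearest[inliers] == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers
```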

Cite as

Zhen Zhang, Junyu Huang, and Qilong Feng. Faster Approximation Schemes for (Constrained) k-Means with Outliers. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 84:1-84:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.MFCS.2024.84,
  author =	{Zhang, Zhen and Huang, Junyu and Feng, Qilong},
  title =	{{Faster Approximation Schemes for (Constrained) k-Means with Outliers}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{84:1--84:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.84},
  URN =		{urn:nbn:de:0030-drops-206408},
  doi =		{10.4230/LIPIcs.MFCS.2024.84},
  annote =	{Keywords: Approximation algorithms, clustering}
}
Document
A Unified Framework of FPT Approximation Algorithms for Clustering Problems

Authors: Qilong Feng, Zhen Zhang, Ziyun Huang, Jinhui Xu, and Jianxin Wang

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
In this paper, we present a framework for designing FPT approximation algorithms for many k-clustering problems. Our results are based on a new technique for reducing search spaces. A reduced search space is a small subset of the input data that has the guarantee of containing k clients close to the facilities opened in an optimal solution for any clustering problem we consider. We show, somewhat surprisingly, that greedily sampling O(k) clients yields the desired reduced search space, based on which we obtain FPT(k)-time algorithms with improved approximation guarantees for problems such as capacitated clustering, lower-bounded clustering, clustering with service installation costs, fault tolerant clustering, and priority clustering.

Cite as

Qilong Feng, Zhen Zhang, Ziyun Huang, Jinhui Xu, and Jianxin Wang. A Unified Framework of FPT Approximation Algorithms for Clustering Problems. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 5:1-5:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{feng_et_al:LIPIcs.ISAAC.2020.5,
  author =	{Feng, Qilong and Zhang, Zhen and Huang, Ziyun and Xu, Jinhui and Wang, Jianxin},
  title =	{{A Unified Framework of FPT Approximation Algorithms for Clustering Problems}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{5:1--5:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.5},
  URN =		{urn:nbn:de:0030-drops-133495},
  doi =		{10.4230/LIPIcs.ISAAC.2020.5},
  annote =	{Keywords: clustering, approximation algorithms, fixed-parameter tractability}
}
Document
Improved Algorithms for Clustering with Outliers

Authors: Qilong Feng, Zhen Zhang, Ziyun Huang, Jinhui Xu, and Jianxin Wang

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
Clustering is a fundamental problem in unsupervised learning. In many real-world applications, the to-be-clustered data often contains various types of noise, which needs to be removed from the learning process. To address this issue, we consider in this paper two variants of such clustering problems, called k-median with m outliers and k-means with m outliers. Existing techniques for both problems either incur relatively large approximation ratios or can only efficiently deal with a small number of outliers. In this paper, we present an improved solution to each of them for the case where k is a fixed number and m could be quite large. Particularly, we give the first PTAS for the k-median problem with outliers in Euclidean space ℝ^d for possibly high m and d. Our algorithm runs in O(nd((1/ε)(k+m))^{(k/ε)^{O(1)}}) time, which considerably improves the previous result (with running time O(nd(m+k)^{O(m+k)} + ((1/ε)k log n)^{O(1)})) given by [Feldman and Schulman, SODA 2012]. For the k-means with outliers problem, we introduce a (6+ε)-approximation algorithm for general metric spaces with running time O(n(β(1/ε)(k+m))^k) for some constant β > 1. Our algorithm first uses the k-means++ technique to sample O((1/ε)(k+m)) points from the input and then selects the k centers from them. Compared to the more involved existing techniques, our algorithms are much simpler, i.e., using only random sampling, and achieve better performance ratios.
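
The first step mentioned above is D²-sampling à la k-means++; a minimal sketch follows. Selecting the final k centers from the sampled candidate set, which is where the paper's analysis lies, is not reproduced.

```python
import numpy as np

def d2_sample(X, num_samples, seed=0):
    """Sample points with probability proportional to squared distance
    from the already-sampled set (k-means++-style D^2 sampling)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    S = [int(rng.integers(len(X)))]                  # first point uniformly
    d2 = ((X - X[S[0]]) ** 2).sum(1)
    for _ in range(num_samples - 1):
        i = int(rng.choice(len(X), p=d2 / d2.sum()))
        S.append(i)
        d2 = np.minimum(d2, ((X - X[i]) ** 2).sum(1))
    return X[S]
```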

Cite as

Qilong Feng, Zhen Zhang, Ziyun Huang, Jinhui Xu, and Jianxin Wang. Improved Algorithms for Clustering with Outliers. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 61:1-61:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{feng_et_al:LIPIcs.ISAAC.2019.61,
  author =	{Feng, Qilong and Zhang, Zhen and Huang, Ziyun and Xu, Jinhui and Wang, Jianxin},
  title =	{{Improved Algorithms for Clustering with Outliers}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{61:1--61:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.61},
  URN =		{urn:nbn:de:0030-drops-115573},
  doi =		{10.4230/LIPIcs.ISAAC.2019.61},
  annote =	{Keywords: Clustering with Outliers, Approximation, Random Sampling}
}

Zhang, Yihan

Document
Track A: Algorithms, Complexity and Games
Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery

Authors: Nicolas Resch, Chen Yuan, and Yihan Zhang

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
In this work we consider the list-decodability and list-recoverability of arbitrary q-ary codes, for all integer values of q ≥ 2. A code is called (p,L)_q-list-decodable if every radius pn Hamming ball contains fewer than L codewords; (p,𝓁,L)_q-list-recoverability is a generalization where we place radius pn Hamming balls on every point of a combinatorial rectangle with side length 𝓁 and again stipulate that there be fewer than L codewords. Our main contribution is to precisely calculate the maximum value of p for which there exist infinite families of positive rate (p,𝓁,L)_q-list-recoverable codes, the quantity we call the zero-rate threshold. Denoting this value by p_*, we in fact show that codes correcting a p_*+ε fraction of errors must have size O_ε(1), i.e., independent of n. Such a result is typically referred to as a "Plotkin bound." To complement this, a standard random code with expurgation construction shows that there exist positive rate codes correcting a p_*-ε fraction of errors. We also follow a classical proof template (typically attributed to Elias and Bassalygo) to derive from the zero-rate threshold other tradeoffs between rate and decoding radius for list-decoding and list-recovery. Technically, proving the Plotkin bound boils down to demonstrating the Schur convexity of a certain function defined on the q-simplex as well as the convexity of a univariate function derived from it. We remark that an earlier argument claimed similar results for q-ary list-decoding; however, we point out that this earlier proof is flawed.
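
For reference, the two notions defined informally above read as follows in standard notation (d_H denotes Hamming distance):

```latex
\[
\begin{aligned}
C \subseteq [q]^n \text{ is } (p,L)_q\text{-list-decodable}
  &\iff \forall y \in [q]^n:\;
  \bigl|\{\, c \in C : d_H(c,y) \le pn \,\}\bigr| < L,\\
C \text{ is } (p,\ell,L)_q\text{-list-recoverable}
  &\iff \forall\, Y_1,\dots,Y_n \subseteq [q] \text{ with } |Y_i| \le \ell:\;
  \bigl|\{\, c \in C : |\{\, i : c_i \notin Y_i \,\}| \le pn \,\}\bigr| < L.
\end{aligned}
\]
```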

Cite as

Nicolas Resch, Chen Yuan, and Yihan Zhang. Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 99:1-99:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{resch_et_al:LIPIcs.ICALP.2023.99,
  author =	{Resch, Nicolas and Yuan, Chen and Zhang, Yihan},
  title =	{{Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{99:1--99:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.99},
  URN =		{urn:nbn:de:0030-drops-181518},
  doi =		{10.4230/LIPIcs.ICALP.2023.99},
  annote =	{Keywords: Coding theory, List-decoding, List-recovery, Zero-rate thresholds}
}
Document
Generalized List Decoding

Authors: Yihan Zhang, Amitalok J. Budkuley, and Sidharth Jaggi

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
This paper concerns itself with the question of list decoding for general adversarial channels, e.g., bit-flip (XOR) channels, erasure channels, AND (Z-) channels, OR channels, real adder channels, noisy typewriter channels, etc. We precisely characterize when exponential-sized (or positive rate) (L-1)-list decodable codes (where the list size L is a universal constant) exist for such channels. Our criterion essentially asserts that: For any given general adversarial channel, it is possible to construct positive rate (L-1)-list decodable codes if and only if the set of completely positive tensors of order-L with admissible marginals is not entirely contained in the order-L confusability set associated to the channel. The sufficiency is shown via random code construction (combined with expurgation or time-sharing). The necessity is shown by 1) extracting approximately equicoupled subcodes (a generalization of equidistant codes) from any code using hypergraph Ramsey’s theorem, and 2) significantly extending the classic Plotkin bound in coding theory to list decoding for general channels using duality between the completely positive tensor cone and the copositive tensor cone. In the proof, we also obtain a new fact regarding asymmetry of joint distributions, which may be of independent interest. Other results include 1) List decoding capacity with asymptotically large L for general adversarial channels; 2) A tight list size bound for most constant composition codes (a generalization of constant weight codes); 3) Rederivation and demystification of Blinovsky’s [Blinovsky, 1986] characterization of the list decoding Plotkin points (the threshold at which large codes become impossible) for bit-flip channels; 4) Evaluation of general bounds [Wang et al., 2019] for unique decoding in the error correction code setting.

Cite as

Yihan Zhang, Amitalok J. Budkuley, and Sidharth Jaggi. Generalized List Decoding. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 51:1-51:83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.ITCS.2020.51,
  author =	{Zhang, Yihan and Budkuley, Amitalok J. and Jaggi, Sidharth},
  title =	{{Generalized List Decoding}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{51:1--51:83},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.51},
  URN =		{urn:nbn:de:0030-drops-117368},
  doi =		{10.4230/LIPIcs.ITCS.2020.51},
  annote =	{Keywords: Generalized Plotkin bound, general adversarial channels, equicoupled codes, random coding, completely positive tensors, copositive tensors, hypergraph Ramsey theory}
}

Zhang, Simon

Document
GPU-Accelerated Computation of Vietoris-Rips Persistence Barcodes

Authors: Simon Zhang, Mengbai Xiao, and Hao Wang

Published in: LIPIcs, Volume 164, 36th International Symposium on Computational Geometry (SoCG 2020)


Abstract
The computation of Vietoris-Rips persistence barcodes is both execution-intensive and memory-intensive. In this paper, we study the computational structure of Vietoris-Rips persistence barcodes, and identify several unique mathematical properties and algorithmic opportunities with connections to the GPU. Mathematically and empirically, we look into the properties of apparent pairs, which are independently identifiable persistence pairs comprising up to 99% of persistence pairs. We give theoretical upper and lower bounds of the apparent pair rate and model the average case. We also design massively parallel algorithms to take advantage of the very large number of simplices that can be processed independently of each other. Having identified these opportunities, we develop a GPU-accelerated software for computing Vietoris-Rips persistence barcodes, called Ripser++. The software achieves up to 30x speedup over the total execution time of the original Ripser and also reduces CPU-memory usage by up to 2.0x. We believe our GPU-acceleration based efforts open a new chapter for the advancement of topological data analysis in the post-Moore’s Law era.

Cite as

Simon Zhang, Mengbai Xiao, and Hao Wang. GPU-Accelerated Computation of Vietoris-Rips Persistence Barcodes. In 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 164, pp. 70:1-70:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.SoCG.2020.70,
  author =	{Zhang, Simon and Xiao, Mengbai and Wang, Hao},
  title =	{{GPU-Accelerated Computation of Vietoris-Rips Persistence Barcodes}},
  booktitle =	{36th International Symposium on Computational Geometry (SoCG 2020)},
  pages =	{70:1--70:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-143-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{164},
  editor =	{Cabello, Sergio and Chen, Danny Z.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2020.70},
  URN =		{urn:nbn:de:0030-drops-122287},
  doi =		{10.4230/LIPIcs.SoCG.2020.70},
  annote =	{Keywords: Parallel Algorithms, Topological Data Analysis, Vietoris-Rips, Persistent Homology, Apparent Pairs, High Performance Computing, GPU, Random Graphs}
}

Zhang, Brandon

Document
CG Challenge
Conflict-Based Local Search for Minimum Partition into Plane Subgraphs (CG Challenge)

Authors: Jack Spalding-Jamieson, Brandon Zhang, and Da Wei Zheng

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
This paper examines the approach taken by team gitastrophe in the CG:SHOP 2022 challenge. The challenge was to partition the edges of a geometric graph, with vertices represented by points in the plane and edges as straight line segments, into the minimum number of planar subgraphs. We used a simple variation of the conflict-optimizer strategy employed by team Shadoks in the previous year's CG:SHOP, and with it ranked second in the challenge.
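
In outline, a conflict optimizer repeatedly evicts conflicting items into a queue and reinserts each where it collides least. The sketch below is a generic rendition under an assumed pairwise conflicts predicate (for this challenge: two segments that cross cannot share a class); the team's tuned implementation additionally uses weights, randomization, and restarts.

```python
import random
from collections import deque

def conflict_optimizer(items, conflicts, num_classes, steps=100_000, seed=0):
    """conflicts(a, b) -> True if a and b cannot share a class."""
    rng = random.Random(seed)
    classes = [set() for _ in range(num_classes)]
    queue = deque(items)
    while queue and steps > 0:
        steps -= 1
        item = queue.popleft()
        # place the item where it conflicts with the fewest residents
        costs = [sum(conflicts(item, other) for other in cls) for cls in classes]
        best = min(range(num_classes), key=lambda c: (costs[c], rng.random()))
        evicted = {other for other in classes[best] if conflicts(item, other)}
        classes[best] -= evicted
        classes[best].add(item)
        queue.extend(evicted)          # evicted items are re-inserted later
    return classes                     # items still queued (if any) are unplaced
```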

Cite as

Jack Spalding-Jamieson, Brandon Zhang, and Da Wei Zheng. Conflict-Based Local Search for Minimum Partition into Plane Subgraphs (CG Challenge). In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 72:1-72:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{spaldingjamieson_et_al:LIPIcs.SoCG.2022.72,
  author =	{Spalding-Jamieson, Jack and Zhang, Brandon and Zheng, Da Wei},
  title =	{{Conflict-Based Local Search for Minimum Partition into Plane Subgraphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{72:1--72:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.72},
  URN =		{urn:nbn:de:0030-drops-160807},
  doi =		{10.4230/LIPIcs.SoCG.2022.72},
  annote =	{Keywords: local search, planar graph, graph colouring, geometric graph, conflict optimizer}
}
Document
CG Challenge
Coordinated Motion Planning Through Randomized k-Opt (CG Challenge)

Authors: Paul Liu, Jack Spalding-Jamieson, Brandon Zhang, and Da Wei Zheng

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
This paper examines the approach taken by team gitastrophe in the CG:SHOP 2021 challenge. The challenge was to find a sequence of simultaneous moves of square robots between two given configurations that minimized either total distance travelled or makespan (total time). Our winning approach has two main components: an initialization phase that finds a good initial solution, and a k-opt local search phase which optimizes this solution. This led to a first place finish in the distance category and a third place finish in the makespan category.

Cite as

Paul Liu, Jack Spalding-Jamieson, Brandon Zhang, and Da Wei Zheng. Coordinated Motion Planning Through Randomized k-Opt (CG Challenge). In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 64:1-64:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.SoCG.2021.64,
  author =	{Liu, Paul and Spalding-Jamieson, Jack and Zhang, Brandon and Zheng, Da Wei},
  title =	{{Coordinated Motion Planning Through Randomized k-Opt}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{64:1--64:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.64},
  URN =		{urn:nbn:de:0030-drops-138635},
  doi =		{10.4230/LIPIcs.SoCG.2021.64},
  annote =	{Keywords: motion planning, randomized local search, path finding}
}
Document
CG Challenge
Computing Low-Cost Convex Partitions for Planar Point Sets with Randomized Local Search and Constraint Programming (CG Challenge)

Authors: Da Wei Zheng, Jack Spalding-Jamieson, and Brandon Zhang

Published in: LIPIcs, Volume 164, 36th International Symposium on Computational Geometry (SoCG 2020)


Abstract
The Minimum Convex Partition problem (MCP) asks, given a point-set, for a planar subdivision with that point-set as its vertices and with as few edges as possible. In this planar subdivision, the outer face is the convex hull of the point-set, and the interior faces are convex. In this paper, we discuss and implement an approach to this problem based on randomized local search, together with initialization techniques that maximize collinearity. We also solve small instances optimally using a SAT formulation. We explored this as part of the 2020 Computational Geometry Challenge, where we placed first as Team UBC.
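One plausible local-search move for this problem is easy to state; the minimal Python sketch below assumes a hypothetical Subdivision type whose method names are illustrative, not the team’s actual implementation: delete a random interior edge whenever the two faces it separates merge into a convex face.

import random

def local_search(subdivision, rounds=100_000):
    # `subdivision` is a hypothetical planar-subdivision type; the
    # method names below are illustrative placeholders.
    for _ in range(rounds):
        e = random.choice(subdivision.interior_edges())
        # Deleting e merges the two faces it separates; keep the
        # deletion only if the merged face is still convex.
        if subdivision.merged_face_is_convex(e):
            subdivision.remove_edge(e)   # strictly fewer edges, still feasible
    return subdivision

Per the abstract, small instances were instead solved exactly via a SAT formulation rather than by this kind of local search.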

Cite as

Da Wei Zheng, Jack Spalding-Jamieson, and Brandon Zhang. Computing Low-Cost Convex Partitions for Planar Point Sets with Randomized Local Search and Constraint Programming (CG Challenge). In 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 164, pp. 83:1-83:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zheng_et_al:LIPIcs.SoCG.2020.83,
  author =	{Zheng, Da Wei and Spalding-Jamieson, Jack and Zhang, Brandon},
  title =	{{Computing Low-Cost Convex Partitions for Planar Point Sets with Randomized Local Search and Constraint Programming}},
  booktitle =	{36th International Symposium on Computational Geometry (SoCG 2020)},
  pages =	{83:1--83:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-143-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{164},
  editor =	{Cabello, Sergio and Chen, Danny Z.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2020.83},
  URN =		{urn:nbn:de:0030-drops-122412},
  doi =		{10.4230/LIPIcs.SoCG.2020.83},
  annote =	{Keywords: convex partition, randomized local search, planar point sets}
}

Zhang, Wenbo

Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Bisimulation Equivalence of Pushdown Automata Is Ackermann-Complete

Authors: Wenbo Zhang, Qiang Yin, Huan Long, and Xian Xu

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
Deciding bisimulation equivalence of two pushdown automata is one of the most fundamental problems in formal verification. Though Sénizergues established decidability of this problem in 1998, it has taken a long time to understand its complexity: the problem was proven to be non-elementary in 2013, and only recently, Jančar and Schmitz showed that it has an Ackermann upper bound. We improve the lower bound to Ackermann-hard, and thus close the complexity gap.
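For reference, the equivalence being decided is the standard one; in textbook notation (not specific to this paper), a binary relation $R$ over configurations is a bisimulation when related configurations can match each other’s moves and stay related:

\[
(p,q) \in R \ \text{and}\ p \xrightarrow{a} p' \;\Longrightarrow\; \exists q'.\; q \xrightarrow{a} q' \ \text{and}\ (p',q') \in R,
\]
\[
(p,q) \in R \ \text{and}\ q \xrightarrow{a} q' \;\Longrightarrow\; \exists p'.\; p \xrightarrow{a} p' \ \text{and}\ (p',q') \in R.
\]

Two configurations are bisimulation equivalent, written $p \sim q$, if some bisimulation relates them; by the result above, deciding $p \sim q$ for pushdown configurations is Ackermann-complete.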

Cite as

Wenbo Zhang, Qiang Yin, Huan Long, and Xian Xu. Bisimulation Equivalence of Pushdown Automata Is Ackermann-Complete. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 141:1-141:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.ICALP.2020.141,
  author =	{Zhang, Wenbo and Yin, Qiang and Long, Huan and Xu, Xian},
  title =	{{Bisimulation Equivalence of Pushdown Automata Is Ackermann-Complete}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{141:1--141:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.141},
  URN =		{urn:nbn:de:0030-drops-125482},
  doi =		{10.4230/LIPIcs.ICALP.2020.141},
  annote =	{Keywords: PDA, Bisimulation, Equivalence checking}
}

Zhang, Ruizhe

Document
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time

Authors: Zhao Song, Lichen Zhang, and Ruizhe Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We consider the problem of training a multi-layer over-parametrized neural network to minimize the empirical risk induced by a loss function. In the typical setting of over-parametrization, the network width m is much larger than the data dimension d and the number of training samples n (m = poly(n,d)), which induces a prohibitively large weight matrix W ∈ ℝ^{m×m} per layer. Naively, one has to pay O(m²) time to read the weight matrix and evaluate the neural network function in both forward and backward computation. In this work, we show how to reduce the training cost per iteration. Specifically, we propose a framework that incurs m² cost only in the initialization phase and achieves a truly subquadratic cost per iteration in terms of m, i.e., m^{2-Ω(1)} per iteration. Our result has implications beyond standard over-parametrization theory, as it can be viewed as designing an efficient data structure on top of a pre-trained large model to further speed up fine-tuning, a core procedure in deploying large language models (LLMs).
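As a toy illustration of the cost gap only (the paper achieves its m^{2-Ω(1)} bound with a genuine dynamic data structure, not the assumed sparsity index used here), if only k ≪ m neurons can fire per layer, then knowing which ones lets a forward pass touch O(km) entries of W instead of all m²:

import numpy as np

m, k = 2_000, 50      # width m; suppose only k << m neurons fire per layer

rng = np.random.default_rng(0)
W = rng.standard_normal((m, m)) / np.sqrt(m)
x = rng.standard_normal(m)

# Naive evaluation reads all m^2 entries of W: O(m^2) per layer.
dense_out = W @ x

# With a (hypothetical) index of the k neurons that can be active,
# only k output coordinates need computing: O(k*m) work instead.
active = rng.choice(m, size=k, replace=False)
sparse_out = W[active] @ x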

Cite as

Zhao Song, Lichen Zhang, and Ruizhe Zhang. Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{song_et_al:LIPIcs.ITCS.2024.93,
  author =	{Song, Zhao and Zhang, Lichen and Zhang, Ruizhe},
  title =	{{Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{93:1--93:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.93},
  URN =		{urn:nbn:de:0030-drops-196212},
  doi =		{10.4230/LIPIcs.ITCS.2024.93},
  annote =	{Keywords: Deep learning theory, Nonconvex optimization}
}
Document
Track A: Algorithms, Complexity and Games
A Hyperbolic Extension of Kadison-Singer Type Results

Authors: Ruizhe Zhang and Xinzhi Zhang

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
In 2013, Marcus, Spielman, and Srivastava resolved the famous Kadison-Singer conjecture. Their result states that for n independent random vectors v_1,⋯,v_n that have expected squared norm bounded by ε and are in isotropic position in expectation, there is a positive probability that the determinant polynomial det(xI - ∑_{i=1}^n v_i v_i^⊤) has all roots bounded by (1 + √ε)². One interpretation of the Kadison-Singer theorem is that we can always find a partition of the vectors v_1,⋯,v_n into two sets with low discrepancy in terms of the spectral norm (in other words, a bound that relies on the determinant polynomial). In this paper, we provide two results for a broader class of polynomials, the hyperbolic polynomials, in two generalized settings: first, we show that the Kadison-Singer result holds under the weaker assumption that the vectors have a bounded sum of hyperbolic norms; second, we relax the Kadison-Singer result’s distribution assumption to Strongly Rayleigh distributions. To the best of our knowledge, previous results only support determinant polynomials [Anari and Oveis Gharan'14, Kyng, Luh and Song'20], and it is unclear whether they can be generalized to a broader class of polynomials. In addition, we provide a sub-exponential time algorithm for constructing our results.
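For concreteness, the Marcus-Spielman-Srivastava theorem referenced above can be stated as follows (standard formulation): if v_1,…,v_n are independent random vectors with

\[
\mathbb{E}\Big[\sum_{i=1}^{n} v_i v_i^{\top}\Big] = I
\qquad\text{and}\qquad
\mathbb{E}\,\lVert v_i \rVert^{2} \le \varepsilon \ \text{ for all } i,
\]

then

\[
\Pr\Big[\Big\lVert \sum_{i=1}^{n} v_i v_i^{\top} \Big\rVert \le \big(1+\sqrt{\varepsilon}\,\big)^{2}\Big] > 0,
\]

i.e., all roots of det(xI - ∑_{i=1}^n v_i v_i^⊤) are bounded by (1 + √ε)² with positive probability. The paper generalizes this by replacing the determinant polynomial with a hyperbolic polynomial and the Euclidean norm with the corresponding hyperbolic norm.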

Cite as

Ruizhe Zhang and Xinzhi Zhang. A Hyperbolic Extension of Kadison-Singer Type Results. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)