13 Search Results for "Zhang, Peng"


Document
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time

Authors: Zhao Song, Lichen Zhang, and Ruizhe Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We consider the problem of training a multi-layer over-parametrized neural network to minimize the empirical risk induced by a loss function. In the typical setting of over-parametrization, the network width m is much larger than the data dimension d and the number of training samples n (m = poly(n,d)), which induces a prohibitively large weight matrix W ∈ ℝ^{m×m} per layer. Naively, one has to pay O(m²) time to read the weight matrix and evaluate the neural network function in both forward and backward computation. In this work, we show how to reduce the training cost per iteration. Specifically, we propose a framework that pays the m² cost only in the initialization phase and achieves a truly subquadratic cost per iteration in terms of m, i.e., m^{2-Ω(1)} per iteration. Our result has implications beyond standard over-parametrization theory, as it can be viewed as designing an efficient data structure on top of a pre-trained large model to further speed up the fine-tuning process, a core procedure to deploy large language models (LLMs).
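
A minimal illustration (using NumPy, not the paper's data structure) of the cost the framework avoids: with width m, a single layer's forward and backward pass both touch all m² entries of the weight matrix, which is exactly the per-iteration cost the paper reduces to m^{2-Ω(1)}.

# Toy illustration (not the paper's algorithm): with width m, one layer's
# forward and backward pass both touch all m^2 entries of the weight matrix.
import numpy as np

m = 2048                                    # network width; the paper assumes m = poly(n, d)
rng = np.random.default_rng(0)
W = rng.normal(size=(m, m)) / np.sqrt(m)    # one layer's m x m weight matrix
x = rng.normal(size=m)

h = np.maximum(W @ x, 0.0)                  # forward pass: reads all m^2 entries of W

g = rng.normal(size=m)                      # placeholder upstream gradient
grad_W = np.outer(g * (W @ x > 0), x)       # backward pass: an m x m gradient, again m^2 work
print(h.shape, grad_W.shape)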

Cite as

Zhao Song, Lichen Zhang, and Ruizhe Zhang. Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{song_et_al:LIPIcs.ITCS.2024.93,
  author =	{Song, Zhao and Zhang, Lichen and Zhang, Ruizhe},
  title =	{{Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{93:1--93:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.93},
  URN =		{urn:nbn:de:0030-drops-196212},
  doi =		{10.4230/LIPIcs.ITCS.2024.93},
  annote =	{Keywords: Deep learning theory, Nonconvex optimization}
}
Document
Distance Queries over Dynamic Interval Graphs

Authors: Jingbang Chen, Meng He, J. Ian Munro, Richard Peng, Kaiyu Wu, and Daniel J. Zhang

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
We design the first dynamic distance oracles for interval graphs, which are intersection graphs of a set of intervals on the real line, and for proper interval graphs, which are intersection graphs of a set of intervals in which no interval is properly contained in another. For proper interval graphs, we design a linear space data structure which supports distance queries (computing the distance between two query vertices) and vertex insertion or deletion in O(lg n) worst-case time, where n is the number of vertices currently in G. Under incremental (insertion only) or decremental (deletion only) settings, we design linear space data structures that support distance queries in O(lg n) worst-case time and vertex insertion or deletion in O(lg n) amortized time, where n is the maximum number of vertices in the graph. Under fully dynamic settings, we design a data structure that represents an interval graph G in O(n) words of space to support distance queries in O(n lg n/S(n)) worst-case time and vertex insertion or deletion in O(S(n)+lg n) worst-case time, where n is the number of vertices currently in G and S(n) is an arbitrary function that satisfies S(n) = Ω(1) and S(n) = O(n). This implies an O(n)-word solution with O(√{n lg n})-time support for both distance queries and updates. All four data structures can answer shortest path queries by reporting the vertices in the shortest path between two query vertices in O(lg n) worst-case time per vertex. We also study the hardness of supporting distance queries under updates over an intersection graph of 3D axis-aligned line segments, which generalizes our problem to 3D. Finally, we solve the problem of computing the diameter of a dynamic connected interval graph.
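
A minimal static illustration of the objects involved: intervals, their intersection graph, and one distance query answered by plain BFS. The interval set below is a hypothetical example, and this brute-force approach is not the paper's dynamic O(lg n) oracle.

from collections import deque

intervals = [(0, 2), (1, 4), (3, 6), (5, 8), (7, 9)]      # hypothetical input intervals

def intersects(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

n = len(intervals)
adj = [[j for j in range(n) if j != i and intersects(intervals[i], intervals[j])]
       for i in range(n)]                                  # intersection graph, built naively

def distance(s, t):                                        # one distance query by BFS
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None                                            # t unreachable from s

print(distance(0, 4))                                      # 4: path 0-1-2-3-4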

Cite as

Jingbang Chen, Meng He, J. Ian Munro, Richard Peng, Kaiyu Wu, and Daniel J. Zhang. Distance Queries over Dynamic Interval Graphs. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 18:1-18:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ISAAC.2023.18,
  author =	{Chen, Jingbang and He, Meng and Munro, J. Ian and Peng, Richard and Wu, Kaiyu and Zhang, Daniel J.},
  title =	{{Distance Queries over Dynamic Interval Graphs}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{18:1--18:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.18},
  URN =		{urn:nbn:de:0030-drops-193207},
  doi =		{10.4230/LIPIcs.ISAAC.2023.18},
  annote =	{Keywords: interval graph, proper interval graph, intersection graph, geometric intersection graph, distance oracle, distance query, shortest path query, dynamic graph}
}
Document
Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences

Authors: Ming Ding and Peng Zhang

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We present efficient algorithms for solving systems of linear equations in 1-Laplacians of well-shaped simplicial complexes. 1-Laplacians, or higher-dimensional Laplacians, generalize graph Laplacians to higher-dimensional simplicial complexes and play a key role in computational topology and topological data analysis. Previously, nearly-linear time solvers were developed for simplicial complexes with known collapsing sequences and bounded Betti numbers, such as those triangulating a three-ball in ℝ³ (Cohen, Fasy, Miller, Nayyeri, Peng, and Walkington [SODA'2014], Black, Maxwell, Nayyeri, and Winkelman [SODA'2022], Black and Nayyeri [ICALP'2022]). Furthermore, Nested Dissection provides quadratic time solvers for more general systems with nonzero structures representing well-shaped simplicial complexes embedded in ℝ³. We generalize the specialized solvers for 1-Laplacians to simplicial complexes with additional geometric structures but without collapsing sequences and bounded Betti numbers, and we improve the runtime of Nested Dissection. We focus on simplicial complexes that meet two conditions: (1) each individual simplex has a bounded aspect ratio, and (2) they can be divided into "disjoint" and balanced regions with well-shaped interiors and boundaries. Our solvers draw inspiration from the Incomplete Nested Dissection for stiffness matrices of well-shaped trusses (Kyng, Peng, Schwieterman, and Zhang [STOC'2018]).
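
A worked example of the object being solved for, assuming the standard convention L₁ = ∂₁ᵀ∂₁ + ∂₂∂₂ᵀ: the 1-Laplacian of a single filled triangle, with the resulting system solved by a dense solver. This only pins down the definition; it is not the paper's solver.

import numpy as np

# Boundary map d1 (vertices x edges) for the triangle {1,2,3} with edges (1,2), (1,3), (2,3).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
# Boundary map d2 (edges x triangles) for the single oriented 2-simplex (1,2,3).
d2 = np.array([[ 1],
               [-1],
               [ 1]], dtype=float)

L1 = d1.T @ d1 + d2 @ d2.T       # the 1-Laplacian; for this complex L1 = 3*I
b = np.array([1.0, 0.0, -1.0])
x = np.linalg.solve(L1, b)       # dense solve; the paper targets large, sparse, well-shaped complexes
print(L1)
print(x)                         # [ 1/3, 0, -1/3 ]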

Cite as

Ming Ding and Peng Zhang. Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 41:1-41:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{ding_et_al:LIPIcs.ESA.2023.41,
  author =	{Ding, Ming and Zhang, Peng},
  title =	{{Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{41:1--41:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.41},
  URN =		{urn:nbn:de:0030-drops-186947},
  doi =		{10.4230/LIPIcs.ESA.2023.41},
  annote =	{Keywords: 1-Laplacian Solvers, Simplicial Complexes, Incomplete Nested Dissection}
}
Document
APPROX
Hardness Results for Weaver’s Discrepancy Problem

Authors: Daniel A. Spielman and Peng Zhang

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
Marcus, Spielman and Srivastava (Annals of Mathematics 2014) solved the Kadison-Singer Problem by proving a strong form of Weaver’s conjecture: they showed that for all α > 0 and all lists of vectors of norm at most √α whose outer products sum to the identity, there exists a signed sum of those outer products with operator norm at most √{8α} + 2α. We prove that it is NP-hard to distinguish such a list of vectors for which there is a signed sum that equals the zero matrix from those in which every signed sum has operator norm at least η √α, for some absolute constant η > 0. Thus, it is NP-hard to construct a signing that is a constant factor better than that guaranteed to exist. For α = 1/4, we prove that it is NP-hard to distinguish whether there is a signed sum that equals the zero matrix from the case in which every signed sum has operator norm at least 1/4.
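
A tiny brute-force illustration of the objects in the statement: vectors of norm √α whose outer products sum to the identity, and the minimum operator norm over all signings. The vectors below are a hypothetical example; the paper shows that approximating this minimum is NP-hard in general.

import itertools
import numpy as np

# Four vectors in R^2 of norm sqrt(1/2) (so alpha = 1/2) whose outer products sum to the identity.
v = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float) / np.sqrt(2)
assert np.allclose(sum(np.outer(u, u) for u in v), np.eye(2))

best = min(
    np.linalg.norm(sum(s * np.outer(u, u) for s, u in zip(signs, v)), 2)   # operator norm
    for signs in itertools.product([-1, 1], repeat=len(v))
)
print(best)   # 0.0: the signing (+, -, +, -) cancels exactly; deciding whether such a
              # perfect signing exists is NP-hard by the paper's result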

Cite as

Daniel A. Spielman and Peng Zhang. Hardness Results for Weaver’s Discrepancy Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{spielman_et_al:LIPIcs.APPROX/RANDOM.2022.40,
  author =	{Spielman, Daniel A. and Zhang, Peng},
  title =	{{Hardness Results for Weaver’s Discrepancy Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.40},
  URN =		{urn:nbn:de:0030-drops-171628},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.40},
  annote =	{Keywords: Discrepancy Problem, Kadison-Singer Problem, Hardness of Approximation}
}
Document
Vertex Sparsifiers for Hyperedge Connectivity

Authors: Han Jiang, Shang-En Huang, Thatchaphol Saranurak, and Tian Zhang

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
Recently, Chalermsook et al. [SODA'21] introduced a notion of vertex sparsifiers for c-edge connectivity, which has found applications in parameterized algorithms for network design and has also led to exciting dynamic algorithms for c-edge st-connectivity [Jin and Sun, FOCS'22]. We study a natural extension called vertex sparsifiers for c-hyperedge connectivity and construct a sparsifier whose size matches the state of the art for normal graphs. More specifically, we show that, given a hypergraph G = (V,E) with n vertices and m hyperedges, k terminal vertices, and a parameter c, there exists a hypergraph H containing only O(kc³) hyperedges that preserves all minimum cuts (up to value c) between all subsets of terminals. This matches the best bound of O(kc³) edges for normal graphs by [Liu'20]. Moreover, H can be constructed in almost-linear O(p^{1+o(1)} + n(rc log n)^{O(rc)} log m) time, where r = max_{e ∈ E}|e| is the rank of G and p = ∑_{e ∈ E}|e| is the total size of G, or in poly(m, n) time if we slightly relax the size to O(kc³ log^{1.5}(kc)) hyperedges.
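
A brute-force (exponential, for intuition only) computation of the quantity such a sparsifier must preserve: the minimum number of hyperedges crossing a vertex bipartition that separates one terminal from another. The small hypergraph below is a hypothetical example.

import itertools

vertices = [0, 1, 2, 3, 4]
hyperedges = [{0, 1, 2}, {1, 3}, {2, 3, 4}, {0, 4}]
s, t = 0, 4                              # two terminals to separate
free = [v for v in vertices if v not in (s, t)]

best = len(hyperedges)
for assign in itertools.product([0, 1], repeat=len(free)):
    side_s = {s} | {v for v, a in zip(free, assign) if a == 0}
    crossing = sum(1 for e in hyperedges if e & side_s and e - side_s)
    best = min(best, crossing)
print(best)                              # 2: minimum number of hyperedges separating 0 from 4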

Cite as

Han Jiang, Shang-En Huang, Thatchaphol Saranurak, and Tian Zhang. Vertex Sparsifiers for Hyperedge Connectivity. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 70:1-70:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{jiang_et_al:LIPIcs.ESA.2022.70,
  author =	{Jiang, Han and Huang, Shang-En and Saranurak, Thatchaphol and Zhang, Tian},
  title =	{{Vertex Sparsifiers for Hyperedge Connectivity}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{70:1--70:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.70},
  URN =		{urn:nbn:de:0030-drops-170081},
  doi =		{10.4230/LIPIcs.ESA.2022.70},
  annote =	{Keywords: Vertex sparsifier, hypergraph, connectivity}
}
Document
Track A: Algorithms, Complexity and Games
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets

Authors: Ming Ding, Rasmus Kyng, Maximilian Probst Gutenberg, and Peng Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study linear equations in combinatorial Laplacians of k-dimensional simplicial complexes (k-complexes), a natural generalization of graph Laplacians. Combinatorial Laplacians play a crucial role in homology and are a central tool in topology. Beyond this, they have various applications in data analysis and physical modeling problems. It is known that nearly-linear time solvers exist for graph Laplacians. However, nearly-linear time solvers for combinatorial Laplacians are only known for restricted classes of complexes. This paper shows that linear equations in combinatorial Laplacians of 2-complexes are as hard to solve as general linear equations. More precisely, for any constant c ≥ 1, if we can solve linear equations in combinatorial Laplacians of 2-complexes up to high accuracy in time Õ((# of nonzero coefficients)^c), then we can solve general linear equations with polynomially bounded integer coefficients and condition numbers up to high accuracy in time Õ((# of nonzero coefficients)^c). We prove this by a nearly-linear time reduction from general linear equations to combinatorial Laplacians of 2-complexes. Our reduction preserves the sparsity of the problem instances up to poly-logarithmic factors.
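
For contrast with the hardness result, here is a minimal sketch of the baseline the abstract calls easy: a graph Laplacian system Lx = b solved with off-the-shelf conjugate gradient (assuming NumPy and SciPy are available). The paper shows that the 2-complex analogue of this task is as hard as solving general linear equations.

import numpy as np
from scipy.sparse.linalg import cg

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)    # adjacency matrix of a 4-cycle
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian L = D - A

b = np.array([1.0, -1.0, 1.0, -1.0])         # right-hand side summing to zero (in the range of L)
x, info = cg(L, b)                           # conjugate gradient; info == 0 on convergence
print(info, np.linalg.norm(L @ x - b))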

Cite as

Ming Ding, Rasmus Kyng, Maximilian Probst Gutenberg, and Peng Zhang. Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 53:1-53:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ding_et_al:LIPIcs.ICALP.2022.53,
  author =	{Ding, Ming and Kyng, Rasmus and Gutenberg, Maximilian Probst and Zhang, Peng},
  title =	{{Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{53:1--53:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.53},
  URN =		{urn:nbn:de:0030-drops-163945},
  doi =		{10.4230/LIPIcs.ICALP.2022.53},
  annote =	{Keywords: Simplicial Complexes, Combinatorial Laplacians, Linear Equations, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions

Authors: Ming Ding, Rasmus Kyng, and Peng Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We give a nearly-linear time reduction that encodes any linear program as a 2-commodity flow problem with only a small blow-up in size. Under mild assumptions similar to those employed by modern fast solvers for linear programs, our reduction causes only a polylogarithmic multiplicative increase in the size of the program, and runs in nearly-linear time. Our reduction applies to high-accuracy approximation algorithms and exact algorithms. Given an approximate solution to the 2-commodity flow problem, we can extract a solution to the linear program in linear time with only a polynomial factor increase in the error. This implies that any algorithm that solves the 2-commodity flow problem can solve linear programs in essentially the same time. Given a directed graph with edge capacities and two source-sink pairs, the goal of the 2-commodity flow problem is to maximize the sum of the flows routed between the two source-sink pairs subject to edge capacities and flow conservation. A 2-commodity flow problem can be formulated as a linear program, which can be solved to high accuracy in almost the current matrix multiplication time (Cohen-Lee-Song JACM'21). Our reduction shows that linear programs can be approximately solved, to high accuracy, using 2-commodity flow as well. Our proof follows the outline of Itai’s polynomial-time reduction of a linear program to a 2-commodity flow problem (JACM’78). Itai’s reduction shows that exactly solving 2-commodity flow and exactly solving linear programming are polynomial-time equivalent. We improve Itai’s reduction to nearly preserve the problem representation size in each step. In addition, we establish an error bound for approximately solving each intermediate problem in the reduction, and show that the accumulated error is polynomially bounded. We remark that our reduction does not run in strongly polynomial time and that it is open whether 2-commodity flow and linear programming are equivalent in strongly polynomial time.
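
A sketch of the easy direction mentioned in the abstract: a tiny 2-commodity flow instance written as an LP and handed to a generic solver (assuming SciPy; the graph and capacities are a hypothetical example). The paper's contribution is the opposite, nearly-linear-time reduction from general LPs back to 2-commodity flow.

from scipy.optimize import linprog

edges = [("s1", "a"), ("s2", "a"), ("a", "b"), ("b", "t1"), ("b", "t2")]
cap = [1.0, 1.0, 1.5, 1.0, 1.0]
pairs = [("s1", "t1"), ("s2", "t2")]            # (source, sink) for each commodity
nodes = {u for e in edges for u in e}
nvar = 2 * len(edges)                           # one flow variable per (commodity, edge)

def var(i, j):
    return i * len(edges) + j

# Joint capacity constraints: f1_e + f2_e <= cap_e.
A_ub = [[1.0 if j == k else 0.0 for i in range(2) for k in range(len(edges))]
        for j in range(len(edges))]
b_ub = cap

# Flow conservation per commodity at every node except that commodity's source and sink.
A_eq, b_eq = [], []
for i, (s, t) in enumerate(pairs):
    for node in nodes - {s, t}:
        row = [0.0] * nvar
        for j, (u, w) in enumerate(edges):
            if w == node:
                row[var(i, j)] += 1.0           # inflow
            if u == node:
                row[var(i, j)] -= 1.0           # outflow
        A_eq.append(row)
        b_eq.append(0.0)

# Maximize the total flow leaving the two sources (linprog minimizes, hence the minus signs).
c = [0.0] * nvar
for i, (s, _) in enumerate(pairs):
    for j, (u, _) in enumerate(edges):
        if u == s:
            c[var(i, j)] -= 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(-res.fun)                                 # 1.5: both commodities compete for the shared edge a -> b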

Cite as

Ming Ding, Rasmus Kyng, and Peng Zhang. Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 54:1-54:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ding_et_al:LIPIcs.ICALP.2022.54,
  author =	{Ding, Ming and Kyng, Rasmus and Zhang, Peng},
  title =	{{Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{54:1--54:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.54},
  URN =		{urn:nbn:de:0030-drops-163950},
  doi =		{10.4230/LIPIcs.ICALP.2022.54},
  annote =	{Keywords: Two-Commodity Flow Problems, Linear Programming, Fine-Grained Complexity}
}
Document
Training (Overparametrized) Neural Networks in Near-Linear Time

Authors: Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster second-order optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate (independent of the training batch size n), second-order algorithms incur a daunting slowdown in the cost per iteration (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [Zhang et al., 2019; Cai et al., 2019], yielding an O(mn²)-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width m. We show how to speed up the algorithm of [Cai et al., 2019], achieving an Õ(mn)-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension (mn) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an 𝓁₂-regression problem, and then use a Fast-JL type dimension reduction to precondition the underlying Gram matrix in time independent of m, allowing us to find a sufficiently good approximate solution via first-order conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra - which led to recent breakthroughs in convex optimization (ERM, LPs, Regression) - can be carried over to the realm of deep learning as well.
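
A minimal sketch-and-solve illustration of the 𝓁₂-regression idea in the abstract, with a dense Gaussian sketch standing in for the Fast-JL transform; the paper instead uses the sketch to precondition the Gram matrix before running conjugate gradient inside a Gauss-Newton iteration.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5000, 50, 400                        # tall regression problem, sketch size k
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

S = rng.normal(size=(k, n)) / np.sqrt(k)       # dense Gaussian sketch, a stand-in for Fast-JL
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# The sketched solution nearly matches the exact one in residual norm.
print(np.linalg.norm(A @ x_exact - b), np.linalg.norm(A @ x_sketch - b))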

Cite as

Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein. Training (Overparametrized) Neural Networks in Near-Linear Time. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 63:1-63:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{vandenbrand_et_al:LIPIcs.ITCS.2021.63,
  author =	{van den Brand, Jan and Peng, Binghui and Song, Zhao and Weinstein, Omri},
  title =	{{Training (Overparametrized) Neural Networks in Near-Linear Time}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{63:1--63:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.63},
  URN =		{urn:nbn:de:0030-drops-136025},
  doi =		{10.4230/LIPIcs.ITCS.2021.63},
  annote =	{Keywords: Deep learning theory, Nonconvex optimization}
}
Document
Space Hardness of Solving Structured Linear Systems

Authors: Xuangui Huang

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
Space-efficient Laplacian solvers are closely related to the derandomization of space-bounded randomized computations. We show that if the probabilistic logarithmic-space solver or the deterministic nearly logarithmic-space solver for undirected Laplacian matrices can be extended to solve slightly larger subclasses of linear systems, then they can be used to solve all linear systems with similar space complexity. Previously, Kyng and Zhang [Rasmus Kyng and Peng Zhang, 2017] proved such results in the time complexity setting using reductions between approximate solvers. We prove that their reductions can be implemented using constant-depth, polynomial-size threshold circuits.

Cite as

Xuangui Huang. Space Hardness of Solving Structured Linear Systems. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 56:1-56:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{huang:LIPIcs.ISAAC.2020.56,
  author =	{Huang, Xuangui},
  title =	{{Space Hardness of Solving Structured Linear Systems}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{56:1--56:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.56},
  URN =		{urn:nbn:de:0030-drops-134001},
  doi =		{10.4230/LIPIcs.ISAAC.2020.56},
  annote =	{Keywords: linear system solver, logarithmic space, threshold circuit}
}
Document
Invited Talk
Convex Optimization and Dynamic Data Structure (Invited Talk)

Authors: Yin Tat Lee

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
In the last three years, there have been many breakthroughs in optimization, such as nearly quadratic time algorithms for bipartite matching and linear programming algorithms that are as fast as solving Ax = b. All of these algorithms are based on a careful combination of optimization techniques and dynamic data structures. In this talk, we will explain the framework underlying all the recent breakthroughs. Joint work with Jan van den Brand, Michael B. Cohen, Sally Dong, Haotian Jiang, Tarun Kathuria, Danupon Nanongkai, Swati Padmanabhan, Richard Peng, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang, Sam Chiu-wai Wong, Guanghao Ye, and Qiuyi Zhang.

Cite as

Yin Tat Lee. Convex Optimization and Dynamic Data Structure (Invited Talk). In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{lee:LIPIcs.FSTTCS.2020.3,
  author =	{Lee, Yin Tat},
  title =	{{Convex Optimization and Dynamic Data Structure}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.3},
  URN =		{urn:nbn:de:0030-drops-132440},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.3},
  annote =	{Keywords: Convex Optimization, Dynamic Data Structure}
}
Document
On Adaptivity Gaps of Influence Maximization Under the Independent Cascade Model with Full-Adoption Feedback

Authors: Wei Chen and Binghui Peng

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
In this paper, we study the adaptivity gap of the influence maximization problem under the independent cascade model when full-adoption feedback is available. Our main results are upper bounds for several families of well-studied influence graphs, including in-arborescences, out-arborescences, and bipartite graphs. In particular, we prove that the adaptivity gap for in-arborescences is between [e/(e-1), 2e/(e-1)], and for out-arborescences the gap is between [e/(e-1), 2]. These are the first constant upper bounds in the full-adoption feedback model. Our analysis provides several novel ideas for tackling the correlated feedback appearing in adaptive stochastic optimization, which may be of independent interest.
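
A Monte Carlo simulation of the independent cascade model on a small hypothetical directed graph, to pin down the diffusion process whose adaptivity gap the paper bounds; adaptive seeding and full-adoption feedback are not modeled here.

import random

# Out-edges with activation probabilities: u -> [(v, p), ...].
edges = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.3)], 2: [(3, 0.3)], 3: []}

def cascade(seeds, rng):
    active, frontier = set(seeds), list(seeds)
    while frontier:                      # each newly activated node gets one chance per out-edge
        nxt = []
        for u in frontier:
            for v, p in edges[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

rng = random.Random(0)
trials = 100000
print(sum(cascade({0}, rng) for _ in range(trials)) / trials)   # estimated spread of seed {0}, about 2.28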

Cite as

Wei Chen and Binghui Peng. On Adaptivity Gaps of Influence Maximization Under the Independent Cascade Model with Full-Adoption Feedback. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 24:1-24:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ISAAC.2019.24,
  author =	{Chen, Wei and Peng, Binghui},
  title =	{{On Adaptivity Gaps of Influence Maximization Under the Independent Cascade Model with Full-Adoption Feedback}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{24:1--24:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.24},
  URN =		{urn:nbn:de:0030-drops-115208},
  doi =		{10.4230/LIPIcs.ISAAC.2019.24},
  annote =	{Keywords: Adaptive influence maximization, adaptivity gap, full-adoption feedback}
}
Document
A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem

Authors: Yao Xu, Yong Chen, Guohui Lin, Tian Liu, Taibo Luo, and Peng Zhang

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
The maximum duo-preservation string mapping (Max-Duo) problem is the complement of the well-studied minimum common string partition (MCSP) problem, both of which have applications in many fields including text compression and bioinformatics. k-Max-Duo is the restricted version of Max-Duo in which every letter of the alphabet occurs at most k times in each of the strings; it readily reduces to the well-known maximum independent set (MIS) problem on a graph of maximum degree Δ ≤ 6(k-1). In particular, 2-Max-Duo can then be approximated arbitrarily close to 1.8 using the state-of-the-art approximation algorithm for the MIS problem. 2-Max-Duo was proved APX-hard, and very recently a (1.6 + ε)-approximation was claimed, for any ε > 0. In this paper, we present a vertex-degree reduction technique, based on which we show that 2-Max-Duo can be approximated arbitrarily close to 1.4.
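
A brute-force, definition-level example of the quantity being approximated (not the paper's algorithm): over all letter-preserving bijections between positions of two strings, count how many duos (adjacent position pairs) are mapped to adjacent positions. The strings below form a 2-Max-Duo instance, since each letter occurs at most twice.

from itertools import permutations

A, B = "abcab", "ababc"          # B is a permutation of A; each letter occurs at most twice

best = 0
for perm in permutations(range(len(B))):
    if any(A[i] != B[perm[i]] for i in range(len(A))):
        continue                 # not a letter-preserving bijection
    preserved = sum(1 for i in range(len(A) - 1) if perm[i + 1] == perm[i] + 1)
    best = max(best, preserved)
print(best)                      # 3 preserved duos for this pair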

Cite as

Yao Xu, Yong Chen, Guohui Lin, Tian Liu, Taibo Luo, and Peng Zhang. A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 66:1-66:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{xu_et_al:LIPIcs.ISAAC.2017.66,
  author =	{Xu, Yao and Chen, Yong and Lin, Guohui and Liu, Tian and Luo, Taibo and Zhang, Peng},
  title =	{{A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{66:1--66:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.66},
  URN =		{urn:nbn:de:0030-drops-82120},
  doi =		{10.4230/LIPIcs.ISAAC.2017.66},
  annote =	{Keywords: Approximation algorithm, duo-preservation string mapping, string partition, independent set}
}
Document
Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time

Authors: Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
We study the problem of approximately solving positive linear programs (LPs). This class of LPs models a wide range of fundamental problems in combinatorial optimization and operations research, such as many resource allocation problems, solving non-negative linear systems, computing tomography, and single/multi-commodity flows on graphs. For the special cases of pure packing or pure covering LPs, a recent result by Allen-Zhu and Orecchia [Allen/Zhu/Orecchia, SODA'15] gives an O˜(1/(epsilon^3))-time parallel algorithm, which breaks the longstanding O˜(1/(epsilon^4)) running time bound set by the seminal work of Luby and Nisan [Luby/Nisan, STOC'93]. We present a new parallel algorithm with running time O˜(1/(epsilon^3)) for the more general mixed packing and covering LPs, which improves upon the O˜(1/(epsilon^4))-time algorithm of Young [Young, FOCS'01; Young, arXiv 2014]. Our work leverages ideas from both the optimization-oriented approach [Allen/Zhu/Orecchia, SODA'15; Wang/Mahoney/Mohan/Rao, arXiv 2015] and the more combinatorial approach with phases [Young, FOCS'01; Young, arXiv 2014]. In addition, our algorithm, when directly applied to pure packing or pure covering LPs, gives an improved running time of O˜(1/(epsilon^2)).
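
A concrete instance of the problem class: find x ≥ 0 with Px ≤ p (packing constraints) and Cx ≥ q (covering constraints). It is solved here exactly with a generic LP solver (assuming SciPy) merely to pin down the formulation; the paper gives a parallel approximation scheme instead.

import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])     # packing constraints P x <= p
p = np.array([4.0, 6.0])
C = np.array([[1.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])     # covering constraints C x >= q
q = np.array([2.0, 2.0])

# Feasibility as an LP: minimize 0 subject to P x <= p and -C x <= -q, x >= 0.
res = linprog(c=np.zeros(3),
              A_ub=np.vstack([P, -C]),
              b_ub=np.concatenate([p, -q]),
              bounds=(0, None))
print(res.status, res.x)            # status 0 means a feasible x was found (e.g. x = [1, 1, 0])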

Cite as

Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang. Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{mahoney_et_al:LIPIcs.ICALP.2016.52,
  author =	{Mahoney, Michael W. and Rao, Satish and Wang, Di and Zhang, Peng},
  title =	{{Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^\{-3\}) Time}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.52},
  URN =		{urn:nbn:de:0030-drops-63335},
  doi =		{10.4230/LIPIcs.ICALP.2016.52},
  annote =	{Keywords: Mixed packing and covering, Linear program, Approximation algorithm, Parallel algorithm}
}
