14 Search Results for "Garg, Vijay K."


Document
Practical Minimum Path Cover

Authors: Manuel Cáceres, Brendan Mumey, Santeri Toivonen, and Alexandru I. Tomescu

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Computing a minimum path cover (MPC) of a directed acyclic graph (DAG) is a fundamental problem with a myriad of applications, including reachability. Although it is known how to solve the problem by a simple reduction to minimum flow, recent theoretical advances exploit this idea to obtain algorithms parameterized by the number of paths of an MPC, known as the width. These results obtain fast [Mäkinen et al., TALG 2019] and even linear time [Cáceres et al., SODA 2022] algorithms in the small-width regime. In this paper, we present the first publicly available high-performance implementation of state-of-the-art MPC algorithms, including the parameterized approaches. Our experiments on random DAGs show that parameterized algorithms are orders-of-magnitude faster on dense graphs. Additionally, we present new fast pre-processing heuristics based on transitive edge sparsification. We show that our heuristics improve MPC-solvers by orders of magnitude.
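
As a point of reference for the reduction the abstract alludes to, the size of an MPC (the width) can be computed with the classical Dilworth-style reduction to bipartite matching on the transitive closure. The sketch below is a baseline in Python, assuming networkx and a toy DAG; it is not the parameterized min-flow algorithms engineered in the paper.

# Baseline sketch: width of a DAG (size of a minimum path cover; paths may share
# vertices) via Dilworth's theorem. The minimum chain cover of the reachability
# order equals n minus a maximum matching in the bipartite graph connecting a
# left copy of u to a right copy of v whenever v is reachable from u.
# Classical reduction only, not the paper's parameterized flow algorithms.
import networkx as nx
from networkx.algorithms import bipartite

def dag_width(dag: nx.DiGraph) -> int:
    closure = nx.transitive_closure_dag(dag)          # u -> v iff v is reachable from u
    left = {v: ("L", v) for v in dag}
    right = {v: ("R", v) for v in dag}
    bip = nx.Graph()
    bip.add_nodes_from(left.values(), bipartite=0)
    bip.add_nodes_from(right.values(), bipartite=1)
    bip.add_edges_from((left[u], right[v]) for u, v in closure.edges())
    matching = bipartite.hopcroft_karp_matching(bip, top_nodes=set(left.values()))
    return dag.number_of_nodes() - len(matching) // 2

# Toy example: a diamond DAG needs two paths to cover all four vertices.
g = nx.DiGraph([("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")])
print(dag_width(g))   # -> 2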

Cite as

Manuel Cáceres, Brendan Mumey, Santeri Toivonen, and Alexandru I. Tomescu. Practical Minimum Path Cover. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{caceres_et_al:LIPIcs.SEA.2024.3,
  author =	{C\'{a}ceres, Manuel and Mumey, Brendan and Toivonen, Santeri and Tomescu, Alexandru I.},
  title =	{{Practical Minimum Path Cover}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.3},
  URN =		{urn:nbn:de:0030-drops-203687},
  doi =		{10.4230/LIPIcs.SEA.2024.3},
  annote =	{Keywords: minimum path cover, directed acyclic graph, maximum flow, parameterized algorithms, edge sparsification, algorithm engineering}
}
Document
Track A: Algorithms, Complexity and Games
An O(loglog n)-Approximation for Submodular Facility Location

Authors: Fateme Abbasi, Marek Adamczyk, Miguel Bosch-Calvo, Jarosław Byrka, Fabrizio Grandoni, Krzysztof Sornat, and Antoine Tinguely

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In the Submodular Facility Location problem (SFL) we are given a collection of n clients and m facilities in a metric space. A feasible solution consists of an assignment of each client to some facility. For each client, one has to pay the distance to the associated facility. Furthermore, for each facility f to which we assign the subset of clients S^f, one has to pay the opening cost g(S^f), where g(⋅) is a monotone submodular function with g(∅) = 0. SFL is APX-hard since it includes the classical (metric uncapacitated) Facility Location problem (with uniform facility costs) as a special case. Svitkina and Tardos [SODA'06] gave the current-best O(log n) approximation algorithm for SFL. The same authors posed the open problem of whether SFL admits a constant approximation and provided such an approximation for a very restricted special case of the problem. We make some progress towards the solution of the above open problem by presenting an O(loglog n) approximation. Our approach is rather flexible and can be easily extended to generalizations and variants of SFL. In more detail, we achieve the same approximation factor for the natural generalizations of SFL where the opening cost of each facility f is of the form p_f + g(S^f) or w_f * g(S^f), where p_f, w_f ≥ 0 are input values. We also obtain an improved approximation algorithm for the related Universal Stochastic Facility Location problem. In this problem, one is given a classical (metric) facility location instance and has to a priori assign each client to some facility. Then a subset of active clients is sampled from some given distribution, and one has to pay (a posteriori) only the connection and opening costs induced by the active clients. The expected opening cost of each facility f can be modelled with a submodular function of the set of clients assigned to f.
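
To make the objective concrete, the sketch below evaluates the SFL cost of a fixed assignment: connection distances plus the submodular opening cost g(S^f) of each opened facility, using a toy coverage function as g. All names and data are illustrative; this shows only the objective, not the paper's O(loglog n)-approximation algorithm.

# Evaluate the SFL objective for a fixed assignment of clients to facilities:
# total connection distance plus, for each opened facility f, the submodular
# opening cost g(S^f) of the set S^f of clients assigned to f. Toy data only.
import math
from collections import defaultdict

def sfl_cost(clients, facilities, assignment, g):
    """clients/facilities: name -> coordinates; assignment: client -> facility."""
    connection = sum(math.dist(clients[c], facilities[assignment[c]]) for c in clients)
    served = defaultdict(set)
    for c, f in assignment.items():
        served[f].add(c)
    return connection + sum(g(S) for S in served.values())

# A monotone submodular opening cost with g(emptyset) = 0: a coverage function
# counting the distinct "features" of the clients a facility serves.
features = {"c1": {1, 2}, "c2": {2, 3}, "c3": {4}}
g = lambda S: len(set().union(*(features[c] for c in S))) if S else 0

clients = {"c1": (0, 0), "c2": (1, 0), "c3": (5, 5)}
facilities = {"f1": (0, 1), "f2": (5, 4)}
print(sfl_cost(clients, facilities, {"c1": "f1", "c2": "f1", "c3": "f2"}, g))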

Cite as

Fateme Abbasi, Marek Adamczyk, Miguel Bosch-Calvo, Jarosław Byrka, Fabrizio Grandoni, Krzysztof Sornat, and Antoine Tinguely. An O(loglog n)-Approximation for Submodular Facility Location. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abbasi_et_al:LIPIcs.ICALP.2024.5,
  author =	{Abbasi, Fateme and Adamczyk, Marek and Bosch-Calvo, Miguel and Byrka, Jaros{\l}aw and Grandoni, Fabrizio and Sornat, Krzysztof and Tinguely, Antoine},
  title =	{{An O(loglog n)-Approximation for Submodular Facility Location}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.5},
  URN =		{urn:nbn:de:0030-drops-201488},
  doi =		{10.4230/LIPIcs.ICALP.2024.5},
  annote =	{Keywords: approximation algorithms, facility location, submodular facility location, universal stochastic facility location}
}
Document
Track A: Algorithms, Complexity and Games
Fully-Scalable MPC Algorithms for Clustering in High Dimension

Authors: Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We design new parallel algorithms for clustering in high-dimensional Euclidean spaces. These algorithms run in the Massively Parallel Computation (MPC) model, and are fully scalable, meaning that the local memory in each machine may be n^σ for an arbitrarily small fixed σ > 0. Importantly, the local memory may be substantially smaller than the number of clusters k, yet all our algorithms are fast, i.e., run in O(1) rounds. We first devise a fast MPC algorithm for O(1)-approximation of uniform Facility Location. This is the first fully-scalable MPC algorithm that achieves O(1)-approximation for any clustering problem in a general geometric setting; previous algorithms only provide a poly(log n)-approximation or apply to restricted inputs, such as low dimension or a small number of clusters k; see, e.g., [Bhaskara and Wijewardena, ICML'18; Cohen-Addad et al., NeurIPS'21; Cohen-Addad et al., ICML'22]. We then build on this Facility Location result and devise a fast MPC algorithm that achieves an O(1)-bicriteria approximation for k-Median and for k-Means, namely, it computes (1+ε)k clusters of cost within an O(1/ε²) factor of the optimum for k clusters. A primary technical tool that we introduce, which may be of independent interest, is a new MPC primitive for geometric aggregation, namely, computing for every data point a statistic of its approximate neighborhood, for statistics like range counting and nearest-neighbor search. Our implementation of this primitive works in high dimension, and is based on consistent hashing (aka sparse partition), a technique that was recently used for streaming algorithms [Czumaj et al., FOCS'22].
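
The geometric aggregation primitive described above computes, for every data point, a statistic of its (approximate) neighborhood. The brute-force sequential stand-in below computes one such statistic exactly (range counts) with numpy on random high-dimensional data; the consistent-hashing and MPC machinery of the paper is not reproduced.

# Sequential stand-in for the geometric aggregation primitive: for every point,
# a statistic of its neighborhood -- here the exact number of other points within
# distance r. The paper computes such statistics approximately in O(1) MPC rounds
# with n^sigma local memory via consistent hashing; this is only the brute force.
import numpy as np

def range_counts(points: np.ndarray, r: float) -> np.ndarray:
    diffs = points[:, None, :] - points[None, :, :]   # (n, n, d) pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)            # (n, n) pairwise distances
    return (dists <= r).sum(axis=1) - 1               # exclude the point itself

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 20))                      # 100 points in dimension 20
print(range_counts(pts, r=5.0)[:10])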

Cite as

Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Fully-Scalable MPC Algorithms for Clustering in High Dimension. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{czumaj_et_al:LIPIcs.ICALP.2024.50,
  author =	{Czumaj, Artur and Gao, Guichen and Jiang, Shaofeng H.-C. and Krauthgamer, Robert and Vesel\'{y}, Pavel},
  title =	{{Fully-Scalable MPC Algorithms for Clustering in High Dimension}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.50},
  URN =		{urn:nbn:de:0030-drops-201938},
  doi =		{10.4230/LIPIcs.ICALP.2024.50},
  annote =	{Keywords: Massively parallel computing, high dimension, facility location, k-median, k-means}
}
Document
Track A: Algorithms, Complexity and Games
A Note on Approximating Weighted Nash Social Welfare with Additive Valuations

Authors: Yuda Feng and Shi Li

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We give the first O(1)-approximation for the weighted Nash Social Welfare problem with additive valuations. The approximation ratio we obtain is e^{1/e} + ε ≈ 1.445 + ε, which matches the best known approximation ratio for the unweighted case [Barman et al., 2018]. Both our algorithm and analysis are simple. We solve a natural configuration LP for the problem, and obtain the allocation of items to agents using a randomized version of the Shmoys-Tardos rounding algorithm developed for unrelated machine scheduling problems [Shmoys and Tardos, 1993]. In the analysis, we show that the approximation ratio of the algorithm is at most the worst gap between the Nash social welfare of the optimum allocation and that of an EF1 allocation, for an unweighted Nash Social Welfare instance with identical additive valuations. This was shown to be at most e^{1/e} ≈ 1.445 by Barman et al. [Barman et al., 2018], leading to our approximation ratio.
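
For reference, the objective being approximated is the weighted geometric mean of agents' additive valuations. A minimal sketch of evaluating it on a toy allocation, assuming positive utilities and weights summing to 1; the configuration-LP rounding algorithm itself is not reproduced.

# Weighted Nash Social Welfare of an allocation with additive valuations:
# prod_i v_i(A_i)^{w_i}, computed in log-space. Assumes positive utilities and
# weights summing to 1. This evaluates the objective only.
import math

def weighted_nsw(values, weights, allocation):
    """values[i][j]: agent i's value for item j; allocation[i]: items of agent i."""
    log_nsw = 0.0
    for i, items in allocation.items():
        utility = sum(values[i][j] for j in items)
        log_nsw += weights[i] * math.log(utility)
    return math.exp(log_nsw)

values = {"a": {"x": 3, "y": 1, "z": 2}, "b": {"x": 1, "y": 4, "z": 2}}
weights = {"a": 0.5, "b": 0.5}
allocation = {"a": {"x", "z"}, "b": {"y"}}
print(weighted_nsw(values, weights, allocation))   # sqrt(5 * 4) ~ 4.47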

Cite as

Yuda Feng and Shi Li. A Note on Approximating Weighted Nash Social Welfare with Additive Valuations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 63:1-63:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{feng_et_al:LIPIcs.ICALP.2024.63,
  author =	{Feng, Yuda and Li, Shi},
  title =	{{A Note on Approximating Weighted Nash Social Welfare with Additive Valuations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{63:1--63:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.63},
  URN =		{urn:nbn:de:0030-drops-202068},
  doi =		{10.4230/LIPIcs.ICALP.2024.63},
  annote =	{Keywords: Nash Social Welfare, Configuration LP, Approximation Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Algorithms for Connectivity Augmentation

Authors: Ce Jin, Michael Kapralov, Sepideh Mahabadi, and Ali Vakilian

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We study the k-connectivity augmentation problem (k-CAP) in the single-pass streaming model. Given a (k-1)-edge connected graph G = (V,E) that is stored in memory, and a stream of weighted edges (also called links) L with weights in {0,1,…,W}, the goal is to choose a minimum weight subset L' ⊆ L of the links such that G' = (V,E∪ L') is k-edge connected. We give a (2+ε)-approximation algorithm for this problem which requires storing O(ε^{-1} n log n) words. Moreover, we show the tightness of our result: any algorithm achieving a better-than-2 approximation for the problem requires Ω(n²) bits of space even when k = 2. This establishes a gap between the optimal approximation factor one can obtain in the streaming versus the offline setting for k-CAP. We further consider a natural generalization to the fully streaming model where both E and L arrive in the stream in an arbitrary order. We show that this problem has a space lower bound that matches the best possible size of a spanner of the same approximation ratio. Following this, we give improved results for spanners on weighted graphs: we show a streaming algorithm that finds a (2t-1+ε)-approximate weighted spanner of size at most O(ε^{-1} n^{1+1/t} log n) for integer t, whereas the size of the best prior streaming algorithm for spanners on weighted graphs depends on log W. We believe that this result is of independent interest. Using our spanner result, we provide an optimal O(t)-approximation for k-CAP in the fully streaming model with O(nk + n^{1+1/t}) words of space. Finally, we apply our results to network design problems such as the Steiner tree augmentation problem (STAP), k-edge connected spanning subgraph (k-ECSS), and the general Survivable Network Design problem (SNDP). In particular, we show a single-pass O(t log k)-approximation for SNDP using O(kn^{1+1/t}) words of space, where k is the maximum connectivity requirement.
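
The feasibility condition behind k-CAP is easy to state offline: the chosen links must make the graph k-edge connected. A minimal networkx sketch of that check on a toy instance follows; it says nothing about the streaming algorithms or space bounds above.

# Offline feasibility check for k-connectivity augmentation: given a graph G and
# a chosen subset of links L', test whether G' = (V, E ∪ L') is k-edge connected.
# This only illustrates the constraint, not the streaming algorithm.
import networkx as nx

def is_feasible_augmentation(G: nx.Graph, links, k: int) -> bool:
    H = G.copy()
    H.add_edges_from(links)
    return nx.edge_connectivity(H) >= k

# A 4-cycle is 2-edge connected; one chord leaves two vertices of degree 2,
# while both chords give K4, which is 3-edge connected.
C4 = nx.cycle_graph(4)
print(is_feasible_augmentation(C4, [(0, 2)], k=3))            # False
print(is_feasible_augmentation(C4, [(0, 2), (1, 3)], k=3))    # True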

Cite as

Ce Jin, Michael Kapralov, Sepideh Mahabadi, and Ali Vakilian. Streaming Algorithms for Connectivity Augmentation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 93:1-93:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{jin_et_al:LIPIcs.ICALP.2024.93,
  author =	{Jin, Ce and Kapralov, Michael and Mahabadi, Sepideh and Vakilian, Ali},
  title =	{{Streaming Algorithms for Connectivity Augmentation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{93:1--93:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.93},
  URN =		{urn:nbn:de:0030-drops-202367},
  doi =		{10.4230/LIPIcs.ICALP.2024.93},
  annote =	{Keywords: streaming algorithms, connectivity augmentation}
}
Document
Parallel and Distributed Algorithms for the Housing Allocation Problem

Authors: Xiong Zheng and Vijay K. Garg

Published in: LIPIcs, Volume 153, 23rd International Conference on Principles of Distributed Systems (OPODIS 2019)


Abstract
We propose parallel and distributed algorithms for the housing allocation problem. In this problem, there is a set of agents and a set of houses. Each agent has a strict preference list for a subset of houses. We need to find a matching of agents to houses such that some criterion is optimized. One such criterion which has attracted much attention is Pareto optimality. A matching is Pareto optimal if no coalition of agents can be strictly better off by exchanging houses among themselves. We also study the housing market problem, a variant of the housing allocation problem, where each agent initially owns a house. In addition to Pareto optimality, we are also interested in finding a matching in the core of a housing market. A matching is in the core if there is no coalition of agents that can be better off by breaking away from other agents and switching houses only among themselves in the initial allocation. In the first part of this work, we show that computing a Pareto optimal matching of a house allocation is in CC and computing a matching in the core of a housing market is CC-hard, where CC is the class of problems logspace reducible to the comparator circuit value problem. Given a matching of agents to houses, we show that verifying whether it is Pareto optimal is in NC. We also show that verifying whether it is in the core can be done in NC. We then give an algorithm to show that computing a maximum cardinality Pareto optimal matching for the housing allocation problem is in RNC² and quasi-NC². In the second part of this work, we present a distributed version of the top trading cycle algorithm for finding a matching in the core of a housing market. To that end, we first present two algorithms for finding all the disjoint cycles in a functional graph. The first algorithm is a Las Vegas algorithm which terminates in O(log l) rounds with high probability, where l is the length of the longest cycle. The second algorithm is a deterministic algorithm which terminates in O(log* n log l) rounds, where n is the number of nodes in the graph. Both algorithms work in the synchronous distributed model and use messages of size O(log n). By applying these two algorithms for finding cycles in a functional graph, we obtain a distributed top trading cycle algorithm which terminates in O(n) rounds and requires O(n²) messages.
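
For context, the sequential top trading cycle algorithm that the paper distributes works as follows: every remaining agent points to the current owner of its most-preferred remaining house, at least one cycle exists in the resulting functional graph, and agents on a cycle trade along it and leave. Below is a plain sequential sketch on a toy housing market, assuming complete preference lists; the paper's distributed algorithms are not reproduced.

# Sequential Top Trading Cycle (TTC) for the housing market: repeatedly resolve a
# cycle of "points to the owner of my best remaining house" and remove its agents.
# The result is the unique core matching. Complete preference lists are assumed.
# This is the classical sequential algorithm, not the paper's distributed version.
def top_trading_cycle(prefs, owns):
    """prefs: agent -> houses in strict preference order; owns: agent -> initial house."""
    owner = {h: a for a, h in owns.items()}
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Each remaining agent points to the owner of its best still-available house.
        target = {}
        for a in remaining:
            best = next(h for h in prefs[a] if owner[h] in remaining)
            target[a] = (best, owner[best])
        # Walk the functional graph agent -> pointed-to owner until a node repeats.
        a = next(iter(remaining))
        seen = {}
        while a not in seen:
            seen[a] = len(seen)
            a = target[a][1]
        cycle = [x for x, idx in seen.items() if idx >= seen[a]]
        # Agents on the cycle receive the house they point to and leave.
        for a in cycle:
            assignment[a] = target[a][0]
            remaining.discard(a)
    return assignment

prefs = {1: ["h2", "h1", "h3"], 2: ["h1", "h2", "h3"], 3: ["h1", "h2", "h3"]}
owns = {1: "h1", 2: "h2", 3: "h3"}
print(top_trading_cycle(prefs, owns))   # {1: 'h2', 2: 'h1', 3: 'h3'}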

Cite as

Xiong Zheng and Vijay K. Garg. Parallel and Distributed Algorithms for the Housing Allocation Problem. In 23rd International Conference on Principles of Distributed Systems (OPODIS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 153, pp. 23:1-23:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{zheng_et_al:LIPIcs.OPODIS.2019.23,
  author =	{Zheng, Xiong and Garg, Vijay K.},
  title =	{{Parallel and Distributed Algorithms for the Housing Allocation Problem}},
  booktitle =	{23rd International Conference on Principles of Distributed Systems (OPODIS 2019)},
  pages =	{23:1--23:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-133-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{153},
  editor =	{Felber, Pascal and Friedman, Roy and Gilbert, Seth and Miller, Avery},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2019.23},
  URN =		{urn:nbn:de:0030-drops-118090},
  doi =		{10.4230/LIPIcs.OPODIS.2019.23},
  annote =	{Keywords: Parallel Algorithm, Distributed Algorithm, Housing Allocation, Housing Markets, Pareto optimality}
}
Document
Linearizable Replicated State Machines With Lattice Agreement

Authors: Xiong Zheng, Vijay K. Garg, and John Kaippallimalil

Published in: LIPIcs, Volume 153, 23rd International Conference on Principles of Distributed Systems (OPODIS 2019)


Abstract
This paper studies the lattice agreement problem in asynchronous systems and explores its application to building a linearizable replicated state machine (RSM). First, we propose an algorithm to solve the lattice agreement problem in O(log f) asynchronous rounds, where f is the number of crash failures that the system can tolerate. This is an exponential improvement over the previous best upper bound of O(f). Second, Faleiro et al. have shown in [Faleiro et al. PODC, 2012] that a combination of conflict-free data types and lattice agreement protocols can be applied to implement a linearizable RSM. They give a Paxos-style lattice agreement protocol, which can be adapted to implement a linearizable RSM and guarantee that a command by a client can be learned in at most O(n) message delays, where n is the number of proposers. Later, Xiong et al. in [Xiong et al. DISC, 2018] gave a lattice agreement protocol which improves the O(n) message-delay guarantee to O(f). However, neither of these protocols is practical for building a linearizable RSM. Thus, in the second part of the paper, we first give an improved protocol based on the one proposed by Xiong et al. Then, we implement a simple linearizable RSM using our improved protocol and compare our implementation with an open source Java implementation of Paxos. Results show that better performance can be obtained by using lattice-agreement-based protocols to implement a linearizable RSM compared to traditional consensus-based protocols.

Cite as

Xiong Zheng, Vijay K. Garg, and John Kaippallimalil. Linearizable Replicated State Machines With Lattice Agreement. In 23rd International Conference on Principles of Distributed Systems (OPODIS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 153, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{zheng_et_al:LIPIcs.OPODIS.2019.29,
  author =	{Zheng, Xiong and Garg, Vijay K. and Kaippallimalil, John},
  title =	{{Linearizable Replicated State Machines With Lattice Agreement}},
  booktitle =	{23rd International Conference on Principles of Distributed Systems (OPODIS 2019)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-133-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{153},
  editor =	{Felber, Pascal and Friedman, Roy and Gilbert, Seth and Miller, Avery},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2019.29},
  URN =		{urn:nbn:de:0030-drops-118158},
  doi =		{10.4230/LIPIcs.OPODIS.2019.29},
  annote =	{Keywords: Lattice Agreement, Generalized Lattice Agreement, Replicated State Machine, Consensus}
}
Document
Lattice Agreement in Message Passing Systems

Authors: Xiong Zheng, Changyong Hu, and Vijay K. Garg

Published in: LIPIcs, Volume 121, 32nd International Symposium on Distributed Computing (DISC 2018)


Abstract
This paper studies the lattice agreement problem and the generalized lattice agreement problem in distributed message passing systems. In the lattice agreement problem, given input values from a lattice, processes have to non-trivially decide output values that lie on a chain. We consider the lattice agreement problem in both synchronous and asynchronous systems. For synchronous lattice agreement, we present two algorithms which run in log(f) and min{O(log^2 h(L)), O(log^2 f)} rounds, respectively, where h(L) denotes the height of the input sublattice L, f < n is the number of crash failures the system can tolerate, and n is the number of processes in the system. These algorithms have significantly better round complexity than previously known algorithms. The algorithm by Attiya et al. [Attiya et al. DISC, 1995] takes log(n) synchronous rounds, and the algorithm by Mavronicolas [Mavronicolas, 2018] takes min{O(h(L)), O(sqrt(f))} rounds. For asynchronous lattice agreement, we propose an algorithm with a time complexity of 2*min{h(L), f + 1} message delays, which improves on the previously known time complexity of O(n) message delays. The generalized lattice agreement problem, defined by Faleiro et al. in [Faleiro et al. PODC, 2012], is a generalization of the lattice agreement problem in which it is applied to replicated state machines. We propose an algorithm which guarantees liveness when a majority of the processes are correct in asynchronous systems. Our algorithm requires min{O(h(L)), O(f)} units of time in the worst case, which is better than the O(n) units of time required by the algorithm in [Faleiro et al. PODC, 2012].
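
To fix the requirements, the sketch below checks the lattice agreement conditions on the lattice of finite sets ordered by inclusion (join = union), a standard instantiation: each output must contain the process's own input, lie below the join of all inputs, and all outputs must be pairwise comparable, i.e., lie on a chain. It is only a checker of decisions, not one of the agreement protocols above.

# Checker for lattice agreement on the lattice of finite sets under inclusion
# (join = union): process i proposes x_i and decides y_i such that
#   x_i ⊆ y_i ⊆ x_1 ∪ ... ∪ x_n   (validity), and
#   y_i ⊆ y_j or y_j ⊆ y_i        (comparability: outputs lie on a chain).
# This verifies decisions only; it is not a protocol.
def is_lattice_agreement(inputs, outputs):
    join_of_all = set().union(*inputs)
    valid = all(x <= y <= join_of_all for x, y in zip(inputs, outputs))
    chain = all(a <= b or b <= a for a in outputs for b in outputs)
    return valid and chain

inputs = [{1}, {2}, {3}]
print(is_lattice_agreement(inputs, [{1, 2}, {1, 2}, {1, 2, 3}]))   # True
print(is_lattice_agreement(inputs, [{1}, {2}, {3}]))               # False: not a chain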

Cite as

Xiong Zheng, Changyong Hu, and Vijay K. Garg. Lattice Agreement in Message Passing Systems. In 32nd International Symposium on Distributed Computing (DISC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 121, pp. 41:1-41:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{zheng_et_al:LIPIcs.DISC.2018.41,
  author =	{Zheng, Xiong and Hu, Changyong and Garg, Vijay K.},
  title =	{{Lattice Agreement in Message Passing Systems}},
  booktitle =	{32nd International Symposium on Distributed Computing (DISC 2018)},
  pages =	{41:1--41:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-092-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{121},
  editor =	{Schmid, Ulrich and Widder, Josef},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2018.41},
  URN =		{urn:nbn:de:0030-drops-98301},
  doi =		{10.4230/LIPIcs.DISC.2018.41},
  annote =	{Keywords: Lattice Agreement, Replicated State Machine, Consensus}
}
Document
Fast Detection of Stable and Count Predicates in Parallel Computations

Authors: Himanshu Chauhan and Vijay K. Garg

Published in: LIPIcs, Volume 95, 21st International Conference on Principles of Distributed Systems (OPODIS 2017)


Abstract
Enumerating all consistent states of a parallel computation that satisfy a given predicate is an important problem in debugging and verification of parallel programs. We give a fast algorithm to enumerate all consistent states of a parallel computation that satisfy a stable predicate. In addition, we define a new category of global predicates called count predicates and give an algorithm to enumerate all consistent states (of the computation) that satisfy it. Existing predicate detection algorithms, such as the BFS, DFS, and Lex algorithms, do not exploit knowledge about the nature of the predicates, and thus may visit all global states of the computation in the worst case. In comparison, our algorithms only visit the states that satisfy the given predicate, and thus take time and space that is a polynomial function of the number of states of interest. In doing so, they provide a significant reduction, exponential in many cases, in time complexity in comparison to existing algorithms.
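
As a baseline for what these enumeration algorithms organize: a consistent global state (cut) includes a receive event only if the matching send is also included. The brute-force sketch below enumerates all consistent cuts of a tiny two-process computation and filters them by a stable predicate; the paper's algorithms avoid visiting states that do not satisfy the predicate. Names and data are illustrative.

# Brute-force enumeration of consistent global states (cuts) of a small
# message-passing computation. A cut (c_1, ..., c_n) of per-process event counts
# is consistent iff for every message sent as event s on process p and received
# as event r on process q, r <= c_q implies s <= c_p. This exhaustive baseline is
# what BFS/DFS/Lex-style enumerators traverse and what the paper's predicate-aware
# algorithms avoid visiting in full.
from itertools import product

def consistent_cuts(events_per_proc, messages):
    """events_per_proc: event count per process; messages: ((p, s), (q, r)) pairs."""
    for cut in product(*(range(e + 1) for e in events_per_proc)):
        if all(not (r <= cut[q] and s > cut[p]) for (p, s), (q, r) in messages):
            yield cut

# Two processes with 3 events each; event 2 of process 0 is received as event 1
# of process 1.
cuts = list(consistent_cuts([3, 3], [((0, 2), (1, 1))]))
print(len(cuts))                       # 10 of the 16 cuts are consistent
# A stable predicate ("process 1 has executed its receive") stays true once true.
print([c for c in cuts if c[1] >= 1])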

Cite as

Himanshu Chauhan and Vijay K. Garg. Fast Detection of Stable and Count Predicates in Parallel Computations. In 21st International Conference on Principles of Distributed Systems (OPODIS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 95, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chauhan_et_al:LIPIcs.OPODIS.2017.20,
  author =	{Chauhan, Himanshu and Garg, Vijay K.},
  title =	{{Fast Detection of Stable and Count Predicates in Parallel Computations}},
  booktitle =	{21st International Conference on Principles of Distributed Systems (OPODIS 2017)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-061-3},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{95},
  editor =	{Aspnes, James and Bessani, Alysson and Felber, Pascal and Leit\~{a}o, Jo\~{a}o},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2017.20},
  URN =		{urn:nbn:de:0030-drops-86496},
  doi =		{10.4230/LIPIcs.OPODIS.2017.20},
  annote =	{Keywords: Algorithms, Theory, Predicate Detection, Parallel Programs}
}
Document
Brief Announcement
Brief Announcement: Applying Predicate Detection to the Stable Marriage Problem

Authors: Vijay K. Garg

Published in: LIPIcs, Volume 91, 31st International Symposium on Distributed Computing (DISC 2017)


Abstract
We show that many techniques developed in the context of predicate detection are applicable to the stable marriage problem. The standard Gale-Shapley algorithm can be derived as a special case of detecting linear predicates. We also show that techniques in computation slicing can be used to represent the set of all constrained stable matchings.
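
For reference, here is a plain Python sketch of the standard Gale-Shapley (proposer-optimal deferred acceptance) algorithm mentioned above; the derivation from linear predicate detection and the computation-slicing representation are not reproduced here.

# Standard Gale-Shapley deferred acceptance (proposer-optimal stable matching).
# The brief announcement derives this as a special case of detecting linear
# predicates; that derivation is not reproduced in this plain sketch.
def gale_shapley(proposer_prefs, acceptor_prefs):
    rank = {a: {p: i for i, p in enumerate(prefs)} for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                  # proposers without a partner
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                                 # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]    # p's best not-yet-proposed-to acceptor
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:   # a prefers p to its current partner
            free.append(engaged[a])
            engaged[a] = p
        else:
            free.append(p)                       # rejected; p will propose again later
    return {p: a for a, p in engaged.items()}

proposers = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
acceptors = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(proposers, acceptors))        # {'m2': 'w1', 'm1': 'w2'}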

Cite as

Vijay K. Garg. Brief Announcement: Applying Predicate Detection to the Stable Marriage Problem. In 31st International Symposium on Distributed Computing (DISC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 91, pp. 52:1-52:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{garg:LIPIcs.DISC.2017.52,
  author =	{Garg, Vijay K.},
  title =	{{Brief Announcement: Applying Predicate Detection to the Stable Marriage Problem}},
  booktitle =	{31st International Symposium on Distributed Computing (DISC 2017)},
  pages =	{52:1--52:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-053-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{91},
  editor =	{Richa, Andr\'{e}a},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2017.52},
  URN =		{urn:nbn:de:0030-drops-79719},
  doi =		{10.4230/LIPIcs.DISC.2017.52},
  annote =	{Keywords: Stable Matching, Linear Predicates, Distributive Lattices}
}
Document
Predicate Detection for Parallel Computations with Locking Constraints

Authors: Yen-Jung Chang and Vijay K. Garg

Published in: LIPIcs, Volume 70, 20th International Conference on Principles of Distributed Systems (OPODIS 2016)


Abstract
The happened-before model (or the poset model) has been widely used for modeling the computations (execution traces) of parallel programs and detecting predicates (user-specified conditions). This model captures potential causality as well as locking constraints among the executed events of computations using Lamport's happened-before relation. The detection of a predicate in a computation is performed by checking if the predicate could become true in any reachable global state of the computation. In this paper, we argue that locking constraints are fundamentally different from potential causality. Hence, a poset is not an appropriate model for debugging purposes when the computations contain locking constraints. We present a model called Locking Poset, or a Loset, that generalizes the poset model for locking constraints. Just as a poset captures possibly an exponential number of total orders, a loset captures possibly an exponential number of posets. Therefore, detecting a predicate in a loset is equivalent to detecting the predicate in all corresponding posets. Since determining if a global state is reachable in a computation is a fundamental problem for detecting predicates, this paper first studies the reachability problem in the loset model. We show that the problem is NP-complete. Afterwards, we introduce a subset of reachable global states called lock-free feasible global states such that we can check whether a global state is lock-free feasible in polynomial time. Moreover, we show that lock-free feasible global states can act as "reset" points for reachability and be used to drastically reduce the time for determining the reachability of other global states. We also introduce strongly feasible global states that contain all reachable global states and show that the strong feasibility of a global state can be checked in polynomial time. We show that strong feasibility provides an effective approximation of reachability for many practical applications.

Cite as

Yen-Jung Chang and Vijay K. Garg. Predicate Detection for Parallel Computations with Locking Constraints. In 20th International Conference on Principles of Distributed Systems (OPODIS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 70, pp. 17:1-17:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chang_et_al:LIPIcs.OPODIS.2016.17,
  author =	{Chang, Yen-Jung and Garg, Vijay K.},
  title =	{{Predicate Detection for Parallel Computations with Locking Constraints}},
  booktitle =	{20th International Conference on Principles of Distributed Systems (OPODIS 2016)},
  pages =	{17:1--17:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-031-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{70},
  editor =	{Fatourou, Panagiota and Jim\'{e}nez, Ernesto and Pedone, Fernando},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2016.17},
  URN =		{urn:nbn:de:0030-drops-70867},
  doi =		{10.4230/LIPIcs.OPODIS.2016.17},
  annote =	{Keywords: predicate detection, parallel computations, reachable global states, locking constraints, happened-before relation}
}
Document
QuickLex: A Fast Algorithm for Consistent Global States Enumeration of Distributed Computations

Authors: Yen-Jung Chang and Vijay K. Garg

Published in: LIPIcs, Volume 46, 19th International Conference on Principles of Distributed Systems (OPODIS 2015)


Abstract
Verifying the correctness of executions of concurrent and distributed programs is difficult because they show nondeterministic behavior due to different process scheduling orders. Predicate detection can alleviate this problem by predicting whether a user-specified condition (predicate) could have become true in any global state of the given concurrent or distributed computation. The method is predictive because it generates inferred global states from the observed execution path and then checks if those global states satisfy the predicate. An important part of the predicate detection method is global states enumeration, which generates the consistent global states, including the inferred ones, of the given computation. Cooper and Marzullo gave the first enumeration algorithm based on a breadth first strategy (BFS). Later, many algorithms have been proposed to improve the space and time complexity. Among the existing algorithms, the Tree algorithm due to Jegou et al. has the smallest time complexity and requires O(|P|) space, which is linear in the size of the computation P. In this paper, we present a fast algorithm, QuickLex, to enumerate global states in the lexical order. QuickLex requires much less space than O(|P|). From our experiments, the Tree algorithm requires 2-10 times more memory space than QuickLex. Moreover, QuickLex is 4 times faster than Tree even though the asymptotic time complexity of QuickLex is higher than that of Tree. The reason is that the worst-case time complexity of QuickLex occurs only in computations that are not common in practice. Furthermore, Tree is built on linked lists, whereas QuickLex can be implemented using integer arrays. In comparison with the existing lexical algorithm (Lex), QuickLex is 7 times faster and uses almost the same amount of memory as Lex. Finally, we implement a parallel-and-online predicate detector for concurrent programs using QuickLex, which can detect data races and violations of invariants in the programs.

Cite as

Yen-Jung Chang and Vijay K. Garg. QuickLex: A Fast Algorithm for Consistent Global States Enumeration of Distributed Computations. In 19th International Conference on Principles of Distributed Systems (OPODIS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 46, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chang_et_al:LIPIcs.OPODIS.2015.25,
  author =	{Chang, Yen-Jung and Garg, Vijay K.},
  title =	{{QuickLex: A Fast Algorithm for Consistent Global States Enumeration of Distributed Computations}},
  booktitle =	{19th International Conference on Principles of Distributed Systems (OPODIS 2015)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-98-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{46},
  editor =	{Anceaume, Emmanuelle and Cachin, Christian and Potop-Butucaru, Maria},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2015.25},
  URN =		{urn:nbn:de:0030-drops-66142},
  doi =		{10.4230/LIPIcs.OPODIS.2015.25},
  annote =	{Keywords: consistent global state, algorithm, computation}
}
Document
ActiveMonitor: Asynchronous Monitor Framework for Scalability and Multi-Object Synchronization

Authors: Wei-Lun Hung, Himanshu Chauhan, and Vijay K. Garg

Published in: LIPIcs, Volume 46, 19th International Conference on Principles of Distributed Systems (OPODIS 2015)


Abstract
Monitor objects are used extensively for thread-safety and synchronization in shared memory parallel programs. They provide ease of use, and enable straightforward correctness analysis. However, they inhibit parallelism by enforcing serial executions of critical sections, and thus the performance of parallel programs with monitors scales poorly with the number of processes. Their current design and implementation are also ill-suited for thread synchronization across multiple thread-safe objects. We present ActiveMonitor, a framework that allows multi-object synchronization without global locks, and improves parallelism by exploiting asynchronous execution of critical sections. We evaluate the performance of a Java-based implementation of ActiveMonitor on micro-benchmarks involving light and heavy critical sections, as well as on the single-source shortest-path problem in directed graphs. Our results show that on most of these problems, ActiveMonitor-based programs outperform programs implemented using Java's reentrant-lock and condition constructs.
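
The execution model can be illustrated in a few lines: critical sections are handed to a monitor's single dedicated worker and return futures, so callers overlap their own work with the (still serial) critical sections. A toy Python sketch of that idea follows, assuming only the standard library; it is not the paper's Java framework or its evaluation.

# Toy sketch of the asynchronous-monitor idea: a thread submits a critical section
# to the monitor's single worker and gets a future back, so it can keep working
# instead of blocking. One worker means critical sections still run serially,
# preserving monitor semantics. Illustrative only; not the Java ActiveMonitor framework.
from concurrent.futures import ThreadPoolExecutor
import threading

class AsyncMonitor:
    def __init__(self):
        self._worker = ThreadPoolExecutor(max_workers=1)   # serial critical sections

    def submit(self, critical_section, *args):
        return self._worker.submit(critical_section, *args)

    def shutdown(self):
        self._worker.shutdown(wait=True)

monitor = AsyncMonitor()
counter = {"value": 0}

def increment(k):                      # the critical section: runs only on the worker
    counter["value"] += k
    return counter["value"]

def client():
    futures = [monitor.submit(increment, 1) for _ in range(1000)]
    # ... a real client would do unrelated work here instead of blocking ...
    return futures[-1].result()        # synchronize only when the result is needed

threads = [threading.Thread(target=client) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
monitor.shutdown()
print(counter["value"])                # 4000: every increment ran, serially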

Cite as

Wei-Lun Hung, Himanshu Chauhan, and Vijay K. Garg. ActiveMonitor: Asynchronous Monitor Framework for Scalability and Multi-Object Synchronization. In 19th International Conference on Principles of Distributed Systems (OPODIS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 46, pp. 29:1-29:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{hung_et_al:LIPIcs.OPODIS.2015.29,
  author =	{Hung, Wei-Lun and Chauhan, Himanshu and Garg, Vijay K.},
  title =	{{ActiveMonitor: Asynchronous Monitor Framework for Scalability and Multi-Object Synchronization}},
  booktitle =	{19th International Conference on Principles of Distributed Systems (OPODIS 2015)},
  pages =	{29:1--29:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-98-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{46},
  editor =	{Anceaume, Emmanuelle and Cachin, Christian and Potop-Butucaru, Maria},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2015.29},
  URN =		{urn:nbn:de:0030-drops-66188},
  doi =		{10.4230/LIPIcs.OPODIS.2015.29},
  annote =	{Keywords: concurrent/parallel programming, monitors, concurrency}
}
Document
Approximate Hypergraph Coloring under Low-discrepancy and Related Promises

Authors: Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
A hypergraph is said to be X-colorable if its vertices can be colored with X colors so that no hyperedge is monochromatic. 2-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a 2-colorable k-uniform hypergraph, it is NP-hard to find a 2-coloring miscoloring fewer than a fraction 2^(-k+1) of hyperedges (which is trivially achieved by a random 2-coloring), and the best algorithms to color the hypergraph properly require about n^(1-1/k) colors, approaching the trivial bound of n as k increases. In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a 2-coloring with the fewest miscolored edges) and minimization (finding a proper coloring using the fewest colors) versions, when the input hypergraph is promised to have the following stronger properties than 2-colorability: (A) Low-discrepancy: If the hypergraph has a 2-coloring of discrepancy l << sqrt(k), we give an algorithm to color the hypergraph with about n^(O(l^2/k)) colors. However, for the maximization version, we prove NP-hardness of finding a 2-coloring miscoloring less than a 2^(-O(k)) (resp. k^(-O(k))) fraction of the hyperedges when l = O(log k) (resp. l=2). Assuming the Unique Games conjecture, we improve the latter hardness factor to 2^(-O(k)) for almost discrepancy-1 hypergraphs. (B) Rainbow colorability: If the hypergraph has a (k-l)-coloring such that each hyperedge is polychromatic with all these colors (this is stronger than an (l+1)-discrepancy 2-coloring), we give a 2-coloring algorithm that miscolors at most a k^(-Omega(k)) fraction of the hyperedges when l << sqrt(k), and complement this with a matching Unique Games hardness result showing that when l = sqrt(k), it is hard to even beat the 2^(-k+1) bound achieved by a random coloring. (C) Strong Colorability: We obtain similar (stronger) Min- and Max-2-Coloring algorithmic results in the case of (k+l)-strong colorability.
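
The two quantities the abstract revolves around, discrepancy and the fraction of miscolored (monochromatic) hyperedges, are easy to compute for a given ±1 coloring. Below is a small sketch on a random 4-uniform toy hypergraph, including the random-coloring baseline of 2^(-k+1); none of the algorithmic or hardness results above are reproduced.

# For a ±1 2-coloring of a k-uniform hypergraph: the discrepancy is
# max_e |sum of colors in e|, and a hyperedge is miscolored if it is monochromatic.
# A uniformly random coloring leaves each k-edge monochromatic with probability
# 2^(1-k), the trivial baseline the results above compare against. Toy data only.
import random

def discrepancy(edges, coloring):
    return max(abs(sum(coloring[v] for v in e)) for e in edges)

def miscolored_fraction(edges, coloring):
    mono = sum(1 for e in edges if len({coloring[v] for v in e}) == 1)
    return mono / len(edges)

random.seed(0)
vertices = range(20)
edges = [tuple(random.sample(vertices, 4)) for _ in range(200)]     # 4-uniform
coloring = {v: random.choice((-1, 1)) for v in vertices}
print(discrepancy(edges, coloring))
print(miscolored_fraction(edges, coloring), "vs. random baseline", 2 ** (1 - 4))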

Cite as

Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee. Approximate Hypergraph Coloring under Low-discrepancy and Related Promises. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 152-174, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bhattiprolu_et_al:LIPIcs.APPROX-RANDOM.2015.152,
  author =	{Bhattiprolu, Vijay V. S. P. and Guruswami, Venkatesan and Lee, Euiwoong},
  title =	{{Approximate Hypergraph Coloring under Low-discrepancy and Related Promises}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{152--174},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.152},
  URN =		{urn:nbn:de:0030-drops-53011},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.152},
  annote =	{Keywords: Hypergraph Coloring, Discrepancy, Rainbow Coloring, Strong Coloring, Algorithms, Semidefinite Programming, Hardness of Approximation}
}
  • Refine by Author
  • 8 Garg, Vijay K.
  • 3 Zheng, Xiong
  • 2 Chang, Yen-Jung
  • 2 Chauhan, Himanshu
  • 1 Abbasi, Fateme

  • Refine by Classification
  • 2 Theory of computation → Approximation algorithms analysis
  • 2 Theory of computation → Distributed algorithms
  • 2 Theory of computation → Facility location and clustering
  • 2 Theory of computation → Graph algorithms analysis
  • 1 Theory of computation → Massively parallel algorithms

  • Refine by Keyword
  • 2 Algorithms
  • 2 Consensus
  • 2 Lattice Agreement
  • 2 Replicated State Machine
  • 2 facility location

  • Refine by Type
  • 14 document

  • Refine by Publication Year
  • 5 2024
  • 2 2016
  • 2 2017
  • 2 2018
  • 2 2020