47 Search Results for "Khanna, Sanjeev"


Document
Approximating Maximum-Size Properly Colored Forests

Authors: Yuhang Bai, Kristóf Bérczi, Gergely Csáji, and Tamás Schwarcz

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Properly Colored Spanning Tree problem, we are given an edge-colored undirected graph and the goal is to find a properly colored spanning tree. The problem is interesting not only from a graph coloring point of view, but also because it is closely related to the Degree Bounded Spanning Tree and (1,2)-Traveling Salesman problems. We propose an optimization version called the Maximum-size Properly Colored Forest problem, which aims to find a properly colored forest with as many edges as possible. We consider the problem in different graph classes and for different numbers of colors, and present polynomial-time approximation algorithms as well as inapproximability results for these settings. We also consider the Maximum-size Properly Colored Tree problem, which asks for the maximum size of a properly colored tree that does not necessarily span all the vertices. We show that the optimum is significantly more difficult to approximate than in the forest case, and provide an approximation algorithm for complete multigraphs.
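
To make the central definition concrete: a properly colored forest is an acyclic edge set in which no two edges sharing a vertex have the same color. The sketch below checks this property for a chosen edge set; the (u, v, color) input format is an assumption of this illustrative example, and the sketch is not one of the paper's algorithms.

def is_properly_colored_forest(num_vertices, chosen_edges):
    # chosen_edges: list of (u, v, color) triples with 0 <= u, v < num_vertices.
    parent = list(range(num_vertices))      # union-find for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    colors_at = [set() for _ in range(num_vertices)]
    for u, v, color in chosen_edges:
        if color in colors_at[u] or color in colors_at[v]:
            return False                    # two same-colored edges meet at a vertex
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # the edge would close a cycle
        parent[ru] = rv
        colors_at[u].add(color)
        colors_at[v].add(color)
    return True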

Cite as

Yuhang Bai, Kristóf Bérczi, Gergely Csáji, and Tamás Schwarcz. Approximating Maximum-Size Properly Colored Forests. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bai_et_al:LIPIcs.ESA.2024.14,
  author =	{Bai, Yuhang and B\'{e}rczi, Krist\'{o}f and Cs\'{a}ji, Gergely and Schwarcz, Tam\'{a}s},
  title =	{{Approximating Maximum-Size Properly Colored Forests}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.14},
  URN =		{urn:nbn:de:0030-drops-210858},
  doi =		{10.4230/LIPIcs.ESA.2024.14},
  annote =	{Keywords: Approximation algorithm, (1,2)-traveling salesman problem, Degree bounded spanning tree, Properly colored forest}
}
Document
A Euclidean Embedding for Computing Persistent Homology with Gaussian Kernels

Authors: Jean-Daniel Boissonnat and Kunal Dutta

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Computing persistent homology of large datasets using Gaussian kernels is useful in the domains of topological data analysis and machine learning as shown by Phillips, Wang and Zheng [SoCG 2015]. However, unlike in the case of persistent homology computation using the Euclidean distance or the k-distance, using Gaussian kernels involves significantly higher overhead, as all distance computations are in terms of the Gaussian kernel distance which is computationally more expensive. Further, most algorithmic implementations (e.g., Gudhi, Ripser, etc.) are based on Euclidean distances, so the question of finding a Euclidean embedding - preferably low-dimensional - that preserves the persistent homology computed with Gaussian kernels is quite important. We consider the Gaussian kernel power distance (GKPD) given by Phillips, Wang and Zheng. Given an n-point dataset and a relative error parameter ε ∈ (0,1], we show that the persistent homology of the Čech filtration of the dataset computed using the GKPD can be approximately preserved using O(ε^{-2} log n) dimensions, under a high stable rank condition. Our results also extend to the Delaunay filtration and the (simpler) case of the weighted Rips filtrations constructed using the GKPD. Compared to the Euclidean embedding for the Gaussian kernel function in ∼ n dimensions, which uses the Cholesky decomposition of the matrix of the kernel function applied to all pairs of data points, our embedding may also be viewed as dimensionality reduction - reducing the dimensionality from n to ∼ log n dimensions. Our proof utilizes the embedding of Chen and Phillips [ALT 2017], based on the Random Fourier Functions of Rahimi and Recht [NeurIPS 2007], together with two novel ingredients. The first one is a new decomposition of the squared radii of Čech simplices computed using the GKPD, in terms of the pairwise GKPDs between the vertices, which we state and prove. The second is a new concentration inequality for sums of cosine functions of Gaussian random vectors, which we call Gaussian cosine chaoses. We believe these are of independent interest and will find other applications in the future.
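
The proof builds on an embedding based on Random Fourier Features. As a generic illustration of that ingredient (not the embedding analyzed in the paper), the sketch below is the standard Rahimi-Recht construction for the Gaussian kernel exp(-||x-y||²/(2σ²)); the target dimension D and the bandwidth sigma are free parameters of the example.

import numpy as np

def random_fourier_features(X, D, sigma, seed=None):
    # X: (n, d) data matrix. Returns an (n, D) embedding Z such that
    # Z[i] @ Z[j] approximates exp(-||X[i] - X[j]||^2 / (2 * sigma**2)).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # frequencies ~ N(0, sigma^-2 I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)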

Cite as

Jean-Daniel Boissonnat and Kunal Dutta. A Euclidean Embedding for Computing Persistent Homology with Gaussian Kernels. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{boissonnat_et_al:LIPIcs.ESA.2024.29,
  author =	{Boissonnat, Jean-Daniel and Dutta, Kunal},
  title =	{{A Euclidean Embedding for Computing Persistent Homology with Gaussian Kernels}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.29},
  URN =		{urn:nbn:de:0030-drops-211009},
  doi =		{10.4230/LIPIcs.ESA.2024.29},
  annote =	{Keywords: Persistent homology, Gaussian kernels, Random Fourier Features, Euclidean embedding}
}
Document
Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams

Authors: Amit Chakrabarti, Andrew McGregor, and Anthony Wirth

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The maximum coverage problem is to select k sets, from a collection of m sets, such that the cardinality of their union, in a universe of size n, is maximized. We consider (1-1/e-ε)-approximation algorithms for this NP-hard problem in three standard data stream models. 1) Dynamic Model. The stream consists of a sequence of sets being inserted and deleted. Our multi-pass algorithm uses ε^{-2} k ⋅ polylog(n,m) space. The best previous result (Assadi and Khanna, SODA 2018) used (n + ε^{-4} k) polylog(n,m) space. While both algorithms use O(ε^{-1} log m) passes, our analysis shows that, when ε ≤ 1/log log m, it is possible to reduce the number of passes by a 1/log log m factor without incurring additional space. 2) Random Order Model. In this model, there are no deletions, and the sets forming the instance are uniformly randomly permuted to form the input stream. We show that a single pass and k polylog(n,m) space suffice for an arbitrarily small constant ε. The best previous result, by Warneke et al. (ESA 2023), used k² polylog(n,m) space. 3) Insert-Only Model. Lastly, our results, along with numerous previous results, use a sub-sampling technique introduced by McGregor and Vu (ICDT 2017) to sparsify the input instance. We explain how this technique and others used in the paper can be implemented such that the amortized update time of our algorithm is polylogarithmic. This also implies an improvement of the state-of-the-art insert-only algorithms in terms of the update time: polylog(m,n) update time suffices, whereas the best previous result by Jaud et al. (SEA 2023) required update time that was linear in k.
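
For context on the objective, the sketch below is the classical offline greedy rule that repeatedly picks the set covering the most new elements and achieves the (1-1/e) benchmark against which the streaming guarantees are measured. It is a baseline illustration only, not the paper's streaming algorithm, and the input format is an assumption of the example.

def greedy_max_coverage(sets, k):
    # sets: list of Python sets over the universe; choose up to k of them greedily.
    covered, chosen = set(), []
    for _ in range(min(k, len(sets))):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                       # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered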

Cite as

Amit Chakrabarti, Andrew McGregor, and Anthony Wirth. Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chakrabarti_et_al:LIPIcs.ESA.2024.40,
  author =	{Chakrabarti, Amit and McGregor, Andrew and Wirth, Anthony},
  title =	{{Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.40},
  URN =		{urn:nbn:de:0030-drops-211114},
  doi =		{10.4230/LIPIcs.ESA.2024.40},
  annote =	{Keywords: Data Stream Computation, Maximum Coverage, Submodular Maximization}
}
Document
Approximation Algorithms for Hop Constrained and Buy-At-Bulk Network Design via Hop Constrained Oblivious Routing

Authors: Chandra Chekuri and Rhea Jain

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider two-cost network design models in which edges of the input graph have an associated cost and length. We build upon recent advances in hop-constrained oblivious routing to obtain two sets of results. We address multicommodity buy-at-bulk network design in the nonuniform setting. Existing poly-logarithmic approximations are based on the junction tree approach [Chekuri et al., 2010; Guy Kortsarz and Zeev Nutov, 2011]. We obtain a new polylogarithmic approximation via a natural LP relaxation. This establishes an upper bound on its integrality gap and affirmatively answers an open question raised in [Chekuri et al., 2010]. The rounding is based on recent results in hop-constrained oblivious routing [Ghaffari et al., 2021], and this technique yields a polylogarithmic approximation in more general settings such as set connectivity. Our algorithm for buy-at-bulk network design is based on an LP-based reduction to h-hop constrained network design for which we obtain LP-based bicriteria approximation algorithms. We also consider a fault-tolerant version of h-hop constrained network design where one wants to design a low-cost network to guarantee short paths between a given set of source-sink pairs even when k-1 edges can fail. This model has been considered in network design [Luis Gouveia and Markus Leitner, 2017; Gouveia et al., 2018; Arslan et al., 2020] but no approximation algorithms were known. We obtain polylogarithmic bicriteria approximation algorithms for the single-source setting for any fixed k. We build upon the single-source algorithm and the junction-tree approach to obtain an approximation algorithm for the multicommodity setting when at most one edge can fail.

Cite as

Chandra Chekuri and Rhea Jain. Approximation Algorithms for Hop Constrained and Buy-At-Bulk Network Design via Hop Constrained Oblivious Routing. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 41:1-41:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chekuri_et_al:LIPIcs.ESA.2024.41,
  author =	{Chekuri, Chandra and Jain, Rhea},
  title =	{{Approximation Algorithms for Hop Constrained and Buy-At-Bulk Network Design via Hop Constrained Oblivious Routing}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{41:1--41:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.41},
  URN =		{urn:nbn:de:0030-drops-211124},
  doi =		{10.4230/LIPIcs.ESA.2024.41},
  annote =	{Keywords: Buy-at-bulk, Hop-constrained network design, LP integrality gap, Fault-tolerant network design}
}
Document
APPROX
On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms

Authors: Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Dense subgraph discovery is an important problem in graph mining and network analysis with several applications. Two canonical problems here are to find a maxcore (a subgraph of maximum minimum degree) and to find a densest subgraph (a subgraph of maximum average degree); both can be solved in polynomial time. Veldt, Benson, and Kleinberg [Veldt et al., 2021] introduced the generalized p-mean densest subgraph problem, which captures the maxcore problem when p = -∞ and the densest subgraph problem when p = 1. They observed that for p ≥ 1, the objective function is supermodular and hence the problem can be solved in polynomial time. In this work, we focus on the p-mean densest subgraph problem for p ∈ (-∞, 1). We prove that for every p ∈ (-∞,1), the problem is NP-hard, thus resolving an open question from [Veldt et al., 2021]. We also show that for every p ∈ (0,1), the weighted version of the problem is APX-hard. On the algorithmic front, we describe two simple 1/2-approximation algorithms for every p ∈ (-∞, 1). We complement the approximation algorithms by exhibiting non-trivial instances on which the algorithms simultaneously achieve an approximation factor of at most 1/2.
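
To make the objective concrete: in the formulation of Veldt, Benson, and Kleinberg, the generalized p-mean density of a vertex set S is the p-th power mean of the degrees inside the induced subgraph, so p = -∞ recovers the minimum degree (maxcore) and p = 1 the average degree (densest subgraph). The sketch below simply evaluates that objective for a candidate S, as the editor understands the definition; it is not one of the paper's algorithms.

def p_mean_density(adj, S, p):
    # adj: dict vertex -> set of neighbours; S: iterable of vertices.
    # For p <= 0 the sketch assumes every vertex of S has a neighbour in S.
    S = set(S)
    degrees = [len(adj[v] & S) for v in S]          # degrees in the induced subgraph
    if not degrees:
        return 0.0
    if p == float('-inf'):
        return float(min(degrees))                  # maxcore objective
    if p == 0:
        prod = 1.0
        for d in degrees:                           # geometric mean (limit p -> 0)
            prod *= d
        return prod ** (1.0 / len(degrees))
    return (sum(d ** p for d in degrees) / len(degrees)) ** (1.0 / p)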

Cite as

Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu. On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chandrasekaran_et_al:LIPIcs.APPROX/RANDOM.2024.9,
  author =	{Chandrasekaran, Karthekeyan and Chekuri, Chandra and Torres, Manuel R. and Zhu, Weihao},
  title =	{{On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  URN =		{urn:nbn:de:0030-drops-210025},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  annote =	{Keywords: Densest subgraph problem, Hardness of approximation, Approximation algorithms}
}
Document
APPROX
Scheduling on a Stochastic Number of Machines

Authors: Moritz Buchem, Franziska Eberle, Hugo Kooki Kasuya Rosado, Kevin Schewior, and Andreas Wiese

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We consider a new scheduling problem on parallel identical machines in which the number of machines is initially not known, but follows a given probability distribution. Only after all jobs have been assigned to a given number of bags is the actual number of machines revealed. Subsequently, the jobs need to be assigned to the machines without splitting the bags. This is the stochastic version of a related problem introduced by Stein and Zhong [SODA 2018, TALG 2020] and it is, for example, motivated by bundling jobs that need to be scheduled by data centers. We present two PTASs for the stochastic setting, computing job-to-bag assignments that (i) minimize the expected maximum machine load and (ii) maximize the expected minimum machine load (as in the Santa Claus problem), respectively. The former result follows by careful enumeration combined with known PTASs. For the latter result, we introduce an intricate dynamic program that we apply to a suitably rounded instance.
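
To illustrate the two-stage structure, the sketch below takes a fixed job-to-bag assignment (represented only by the total size of each bag) and a distribution over the number of machines, and upper-bounds the expected maximum machine load by placing whole bags on the least loaded machine, largest first, in every scenario. The greedy second stage and the input format are assumptions of this illustration; the paper's PTASs are not based on this heuristic.

import heapq

def expected_makespan_upper_bound(bag_sizes, machine_distribution):
    # bag_sizes: list of total processing times of the bags.
    # machine_distribution: dict {number_of_machines: probability}.
    expected = 0.0
    for m, prob in machine_distribution.items():
        loads = [0.0] * m
        heapq.heapify(loads)
        for size in sorted(bag_sizes, reverse=True):
            least = heapq.heappop(loads)            # least loaded machine
            heapq.heappush(loads, least + size)
        expected += prob * max(loads)
    return expected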

Cite as

Moritz Buchem, Franziska Eberle, Hugo Kooki Kasuya Rosado, Kevin Schewior, and Andreas Wiese. Scheduling on a Stochastic Number of Machines. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{buchem_et_al:LIPIcs.APPROX/RANDOM.2024.14,
  author =	{Buchem, Moritz and Eberle, Franziska and Kasuya Rosado, Hugo Kooki and Schewior, Kevin and Wiese, Andreas},
  title =	{{Scheduling on a Stochastic Number of Machines}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{14:1--14:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.14},
  URN =		{urn:nbn:de:0030-drops-210073},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.14},
  annote =	{Keywords: scheduling, approximation algorithms, stochastic machines, makespan, max-min fair allocation, dynamic programming}
}
Document
APPROX
Distributional Online Weighted Paging with Limited Horizon

Authors: Yaron Fairstein, Joseph (Seffi) Naor, and Tomer Tsachor

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
In this work we study the classic problem of online weighted paging with a probabilistic prediction model, in which we are given additional information about the input in the form of distributions over page requests, known as distributional online paging (DOP). This work continues a recent line of research on learning-augmented algorithms that incorporates machine-learning predictions in online algorithms, so as to go beyond traditional worst-case competitive analysis, thus circumventing known lower bounds for online paging. We first provide an efficient online algorithm that achieves a constant factor competitive ratio with respect to the best online algorithm (policy) for weighted DOP that follows from earlier work on the stochastic k-server problem. Our main contribution concerns the question of whether distributional information over a limited horizon suffices for obtaining a constant competitive factor. To this end, we define in a natural way a new predictive model with limited horizon, which we call Per-Request Stochastic Prediction (PRSP). We show that we can obtain a constant factor competitive algorithm with respect to the optimal online algorithm for this model.

Cite as

Yaron Fairstein, Joseph (Seffi) Naor, and Tomer Tsachor. Distributional Online Weighted Paging with Limited Horizon. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{fairstein_et_al:LIPIcs.APPROX/RANDOM.2024.15,
  author =	{Fairstein, Yaron and Naor, Joseph (Seffi) and Tsachor, Tomer},
  title =	{{Distributional Online Weighted Paging with Limited Horizon}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.15},
  URN =		{urn:nbn:de:0030-drops-210088},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.15},
  annote =	{Keywords: Online algorithms, Caching, Stochastic analysis, Predictions}
}
Document
APPROX
Weighted Matching in the Random-Order Streaming and Robust Communication Models

Authors: Diba Hashemi and Weronika Wrzos-Kaminska

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the maximum weight matching problem in the random-order semi-streaming model and in the robust communication model. Unlike many other sublinear models, in these two frameworks, there is a large gap between the guarantees of the best known algorithms for the unweighted and weighted versions of the problem. In the random-order semi-streaming setting, the edges of an n-vertex graph arrive in a stream in a random order. The goal is to compute an approximate maximum weight matching with a single pass over the stream using O(n polylog n) space. Our main result is a (2/3-ε)-approximation algorithm for maximum weight matching in random-order streams, using space O(n log n log R), where R is the ratio between the heaviest and the lightest edge in the graph. Our result nearly matches the best known unweighted (2/3+ε₀)-approximation (where ε₀ ∼ 10^{-14} is a small constant) achieved by Assadi and Behnezhad [Assadi and Behnezhad, 2021], and significantly improves upon previous weighted results. Our techniques also extend to the related robust communication model, in which the edges of a graph are partitioned randomly between Alice and Bob. Alice sends a single message of size O(n polylog n) to Bob, who must compute an approximate maximum weight matching. We achieve a (5/6-ε)-approximation using O(n log n log R) words of communication, matching the results of Azarmehr and Behnezhad [Azarmehr and Behnezhad, 2023] for unweighted graphs.

Cite as

Diba Hashemi and Weronika Wrzos-Kaminska. Weighted Matching in the Random-Order Streaming and Robust Communication Models. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 16:1-16:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{hashemi_et_al:LIPIcs.APPROX/RANDOM.2024.16,
  author =	{Hashemi, Diba and Wrzos-Kaminska, Weronika},
  title =	{{Weighted Matching in the Random-Order Streaming and Robust Communication Models}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{16:1--16:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.16},
  URN =		{urn:nbn:de:0030-drops-210097},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.16},
  annote =	{Keywords: Maximum Weight Matching, Streaming, Random-Order Streaming, Robust Communication Complexity}
}
Document
APPROX
Scheduling Splittable Jobs on Configurable Machines

Authors: Matthew Casey, Rajmohan Rajaraman, David Stalfa, and Cheng Tan

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Motivated by modern architectures allowing for the partitioning of a GPU into hardware separated instances, we initiate the study of scheduling splittable jobs on configurable machines. We consider machines that can be configured into smaller instances, which we call blocks, in multiple ways, each of which is referred to as a configuration. We introduce the Configurable Machine Scheduling (cms) problem, where we are given n jobs and a set C of configurations. A schedule consists of a set of machines, each assigned some configuration in C with each block in the configuration assigned to process one job. The amount of a job’s demand that is satisfied by a block is given by an arbitrary function of the job and block. The objective is to construct a schedule using as few machines as possible. We provide a tight logarithmic factor approximation algorithm for this problem in the general setting, a factor (3 + ε) approximation algorithm for arbitrary ε > 0 when there are O(1) input configurations, and a polynomial time approximation scheme when both the number and size of configurations are O(1). Finally, we utilize a technique for finding conic integer combinations in fixed dimension to develop an optimal polynomial time algorithm in the case with O(1) jobs, O(1) blocks, and every configuration up to a given size.
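
To make the cms objects concrete, the sketch below counts the machines used by a candidate schedule and checks that every job's demand is met, taking the "arbitrary function of the job and block" from the abstract as a callable. The data layout is an assumption of this illustration, not the paper's formalism.

def check_cms_schedule(demands, schedule, satisfies):
    # demands: dict job -> demand; schedule: list of machines, each a list of
    # (block, job) pairs drawn from some configuration in C; satisfies(job, block)
    # is the amount of the job's demand served by that block.
    served = {job: 0 for job in demands}
    for machine in schedule:
        for block, job in machine:
            served[job] += satisfies(job, block)
    feasible = all(served[job] >= demands[job] for job in demands)
    return feasible, len(schedule)      # objective: number of machines used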

Cite as

Matthew Casey, Rajmohan Rajaraman, David Stalfa, and Cheng Tan. Scheduling Splittable Jobs on Configurable Machines. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{casey_et_al:LIPIcs.APPROX/RANDOM.2024.22,
  author =	{Casey, Matthew and Rajaraman, Rajmohan and Stalfa, David and Tan, Cheng},
  title =	{{Scheduling Splittable Jobs on Configurable Machines}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.22},
  URN =		{urn:nbn:de:0030-drops-210157},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.22},
  annote =	{Keywords: Scheduling algorithms, Approximation algorithms, Configurable machines, Splittable jobs, Linear programming}
}
Document
APPROX
On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting

Authors: Mayank Goswami and Riko Jacob

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We generalize the classical nuts and bolts problem to a setting where the input is a collection of n nuts and m bolts, and there is no promise of any matching pairs. It is not allowed to compare a nut directly with a nut or a bolt directly with a bolt, and the goal is to perform the fewest nut-bolt comparisons to discover the partial order between the nuts and bolts. We term this problem bipartite sorting. We show that instances of bipartite sorting of the same size exhibit a wide range of complexity, and propose to perform a fine-grained analysis for this problem. We rule out straightforward notions of instance-optimality as being too stringent, and adopt a neighborhood-based definition. Our definition may be of independent interest as a unifying lens for instance-optimal algorithms for other static problems existing in the literature. This includes problems like sorting (Estivill-Castro and Woods, ACM Comput. Surv. 1992), convex hull (Afshani, Barbay and Chan, JACM 2017), adaptive joins (Demaine, López-Ortiz and Munro, SODA 2000), and the recent concept of universal optimality for graphs (Haeupler, Hladík, Rozhoň, Tarjan and Tětek, 2023). As our main result on bipartite sorting, we give a randomized algorithm that is within a factor of O(log³(n+m)) of being instance-optimal w.h.p., with respect to the neighborhood-based definition. As our second contribution, we generalize bipartite sorting to DAG sorting, in which the underlying DAG is not necessarily bipartite. As an unexpected consequence of a simple algorithm for DAG sorting, we rule out a potential lower bound on the widely-studied problem of sorting with priced information, posed by (Charikar, Fagin, Guruswami, Kleinberg, Raghavan and Sahai, STOC 2000). In this problem, comparing keys i and j has a known cost c_{ij} ∈ ℝ^+ ∪ {∞}, and the goal is to sort the keys in an instance-optimal way, by keeping the total cost of an algorithm as close as possible to ∑_{i=1}^{n-1} c_{x(i)x(i+1)}. Here x(1) < ⋯ < x(n) is the sorted order. While several special cases of cost functions have received a lot of attention in the community, no progress on the general version with arbitrary costs has been reported so far. One reason for this lack of progress seems to be a widely-cited Ω(n) lower bound on the competitive ratio for finding the maximum. This Ω(n) lower bound by (Gupta and Kumar, FOCS 2000) uses costs in {0, 1, n, ∞}, and although not extended to sorting, this barrier seems to have stalled any progress on the general cost case. We rule out such a potential lower bound by showing the existence of an algorithm with an Õ(n^{3/4}) competitive ratio for the {0,1,n,∞} cost version. This generalizes the setting of generalized sorting proposed by (Huang, Kannan and Khanna, FOCS 2011), where the costs are either 1 or infinity, and the cost of the cheapest proof is always n-1.

Cite as

Mayank Goswami and Riko Jacob. On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 23:1-23:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{goswami_et_al:LIPIcs.APPROX/RANDOM.2024.23,
  author =	{Goswami, Mayank and Jacob, Riko},
  title =	{{On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{23:1--23:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.23},
  URN =		{urn:nbn:de:0030-drops-210168},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.23},
  annote =	{Keywords: Sorting, Priced Information, Instance Optimality, Nuts and Bolts}
}
Document
APPROX
An EPTAS for Cardinality Constrained Multiple Knapsack via Iterative Randomized Rounding

Authors: Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
In [Math. Oper. Res., 2011], Fleischer et al. introduced a powerful technique for solving the generic class of separable assignment problems (SAP), in which a set of items of given values and weights needs to be packed into a set of bins subject to separable assignment constraints, so as to maximize the total value. The approach of Fleischer et al. relies on solving a configuration LP and sampling a configuration for each bin independently based on the LP solution. While there is a SAP variant for which this approach yields the best possible approximation ratio, for various special cases, there are discrepancies between the approximation ratios obtained using the above approach and the state-of-the-art approximations. This raises the following natural question: Can we do better by iteratively solving the configuration LP and sampling a few bins at a time? To assess the potential of the iterative approach, we consider a specific SAP variant as a case study, Uniform Cardinality Constrained Multiple Knapsack, for which we answer this question affirmatively. The input is a set of items, each with a value and a weight, and a set of uniform-capacity bins. The goal is to assign a subset of the items of maximum total value to the bins such that (i) the capacity of any bin is not exceeded, and (ii) the number of items assigned to each bin satisfies a given cardinality constraint. While the technique of Fleischer et al. yields a (1-1/e)-approximation for the problem, we show that iterative randomized rounding leads to an efficient polynomial-time approximation scheme (EPTAS), thus essentially resolving the complexity status of the problem. Our analysis of iterative randomized rounding may be useful for solving other SAP variants.
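
As a plain restatement of constraints (i) and (ii) above, the sketch below checks a candidate item-to-bin assignment against the uniform capacity and the cardinality bound and reports its total value. The data layout is an assumption of the example; the paper's contribution is the iterative randomized rounding analysis, not this check.

def check_assignment(values, weights, assignment, capacity, cardinality):
    # values/weights: per-item lists; assignment: dict item -> bin
    # (items left unassigned are simply omitted from the dict).
    load, count = {}, {}
    for item, b in assignment.items():
        load[b] = load.get(b, 0) + weights[item]
        count[b] = count.get(b, 0) + 1
    feasible = (all(w <= capacity for w in load.values()) and
                all(c <= cardinality for c in count.values()))
    total_value = sum(values[item] for item in assignment)
    return feasible, total_value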

Cite as

Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai. An EPTAS for Cardinality Constrained Multiple Knapsack via Iterative Randomized Rounding. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 27:1-27:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{doronarad_et_al:LIPIcs.APPROX/RANDOM.2024.27,
  author =	{Doron-Arad, Ilan and Kulik, Ariel and Shachnai, Hadas},
  title =	{{An EPTAS for Cardinality Constrained Multiple Knapsack via Iterative Randomized Rounding}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{27:1--27:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.27},
  URN =		{urn:nbn:de:0030-drops-210204},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.27},
  annote =	{Keywords: multiple knapsack, cardinality constraint, EPTAS, iterative randomized rounding}
}
Document
APPROX
Rectangle Tiling Binary Arrays

Authors: Pratik Ghosal, Syed Mohammad Meesum, and Katarzyna Paluch

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The problem of rectangle tiling binary arrays is defined as follows. Given an n × n array A of zeros and ones and a natural number p, our task is to partition A into at most p rectangular tiles, so that the maximal weight of a tile is minimized. A tile is any rectangular subarray of A. The weight of a tile is the sum of the elements that fall within it. We present an O(n²)-time (i.e., linear in the size of the array) (3/2 + p²/w(A))-approximation algorithm for this problem, where w(A) denotes the weight of the whole array A. This improves on the previously known approximation with ratio 2 when p²/w(A) < 1/2. The result is best possible in the following sense. The algorithm employs the lower bound L = ⌈w(A)/p⌉, which is the only known and used bound on the optimum in all algorithms for rectangle tiling. We prove that a better approximation factor for binary RTile cannot be achieved using L, because there exist arrays whose every partition contains a tile with weight at least (3/2 + p/w(A))L. We also consider the dual problem of rectangle tiling for binary arrays, where we are given an upper bound on the weight of the tiles, and we have to cover the array A with the minimum number of non-overlapping tiles. Both problems have natural extensions to d-dimensional versions, for which we provide analogous results.
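
The lower bound L = ⌈w(A)/p⌉ mentioned above is easy to state in code; the sketch below computes L together with the maximum tile weight of a given partition, so the two can be compared. Representing a tile by its inclusive (top, left, bottom, right) corners is an assumption of this illustration.

import math

def tiling_quality(A, tiles, p):
    # A: n x n list of lists of 0/1 entries; tiles: list of (top, left, bottom, right).
    total = sum(map(sum, A))                        # w(A)
    L = math.ceil(total / p)                        # standard lower bound on the optimum
    heaviest = 0
    for top, left, bottom, right in tiles:
        weight = sum(A[i][j] for i in range(top, bottom + 1)
                             for j in range(left, right + 1))
        heaviest = max(heaviest, weight)
    return L, heaviest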

Cite as

Pratik Ghosal, Syed Mohammad Meesum, and Katarzyna Paluch. Rectangle Tiling Binary Arrays. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ghosal_et_al:LIPIcs.APPROX/RANDOM.2024.28,
  author =	{Ghosal, Pratik and Meesum, Syed Mohammad and Paluch, Katarzyna},
  title =	{{Rectangle Tiling Binary Arrays}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.28},
  URN =		{urn:nbn:de:0030-drops-210214},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.28},
  annote =	{Keywords: Rectangle Tiling, RTILE, DRTILE}
}
Document
RANDOM
Trace Reconstruction from Local Statistical Queries

Authors: Xi Chen, Anindya De, Chin Ho Lee, and Rocco A. Servedio

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The goal of trace reconstruction is to reconstruct an unknown n-bit string x given only independent random traces of x, where a random trace of x is obtained by passing x through a deletion channel. A Statistical Query (SQ) algorithm for trace reconstruction is an algorithm which can only access statistical information about the distribution of random traces of x rather than individual traces themselves. Such an algorithm is said to be 𝓁-local if each of its statistical queries corresponds to an 𝓁-junta function over some block of 𝓁 consecutive bits in the trace. Since several - but not all - known algorithms for trace reconstruction fall under the local statistical query paradigm, it is interesting to understand the abilities and limitations of local SQ algorithms for trace reconstruction. In this paper we establish nearly-matching upper and lower bounds on local Statistical Query algorithms for both worst-case and average-case trace reconstruction. For the worst-case problem, we show that there is an Õ(n^{1/5})-local SQ algorithm that makes all its queries with tolerance τ ≥ 2^{-Õ(n^{1/5})}, and also that any Õ(n^{1/5})-local SQ algorithm must make some query with tolerance τ ≤ 2^{-Ω̃(n^{1/5})}. For the average-case problem, we show that there is an O(log n)-local SQ algorithm that makes all its queries with tolerance τ ≥ 1/poly(n), and also that any O(log n)-local SQ algorithm must make some query with tolerance τ ≤ 1/poly(n).
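
For readers unfamiliar with the model, the sketch below samples a random trace of a bit string x by passing it through a deletion channel that deletes each bit independently with probability delta; a local SQ algorithm never sees such traces individually, only statistics of junta functions over blocks of consecutive trace bits. The deletion rate delta is a free parameter of this illustration.

import random

def random_trace(x, delta, rng=random):
    # x: a string of '0'/'1'; each bit is deleted independently with
    # probability delta and the surviving bits are concatenated.
    return ''.join(bit for bit in x if rng.random() >= delta)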

Cite as

Xi Chen, Anindya De, Chin Ho Lee, and Rocco A. Servedio. Trace Reconstruction from Local Statistical Queries. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 52:1-52:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chen_et_al:LIPIcs.APPROX/RANDOM.2024.52,
  author =	{Chen, Xi and De, Anindya and Lee, Chin Ho and Servedio, Rocco A.},
  title =	{{Trace Reconstruction from Local Statistical Queries}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{52:1--52:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.52},
  URN =		{urn:nbn:de:0030-drops-210459},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.52},
  annote =	{Keywords: trace reconstruction, statistical queries, algorithmic statistics}
}
Document
Nearly-Tight Bounds for Flow Sparsifiers in Quasi-Bipartite Graphs

Authors: Syamantak Das, Nikhil Kumar, and Daniel Vaz

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Flow sparsification is a classic graph compression technique which, given a capacitated graph G on k terminals, aims to construct another capacitated graph H, called a flow sparsifier, that preserves, either exactly or approximately, every multicommodity flow between terminals (ideally, with size as a small function of k). Cut sparsifiers are a restricted variant of flow sparsifiers which are only required to preserve maximum flows between bipartitions of the terminal set. It is known that exact cut sparsifiers require 2^Ω(k) many vertices [Krauthgamer and Rika, SODA 2013], with the hard instances being quasi-bipartite graphs, where there are no edges between non-terminals. On the other hand, it has been shown recently that exact (or even (1+ε)-approximate) flow sparsifiers on networks with just 6 terminals require unbounded size [Krauthgamer and Mosenzon, SODA 2023; Chen and Tan, SODA 2024]. In this paper, we construct exact flow sparsifiers of size 3^(k³) and exact cut sparsifiers of size 2^(k²) for quasi-bipartite graphs. In particular, the flow sparsifiers are contraction-based, that is, they are obtained from the input graph by (vertex) contraction operations. Our main contribution is a new technique to construct sparsifiers that exploits connections to polyhedral geometry, and that can be generalized to graphs with a small separator that separates the graph into small components. We also give an improved reduction theorem for graphs of bounded treewidth [Andoni et al., SODA 2011], implying a flow sparsifier of size O(k⋅w) and quality O((log w)/log log w), where w is the treewidth.

Cite as

Syamantak Das, Nikhil Kumar, and Daniel Vaz. Nearly-Tight Bounds for Flow Sparsifiers in Quasi-Bipartite Graphs. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 45:1-45:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{das_et_al:LIPIcs.MFCS.2024.45,
  author =	{Das, Syamantak and Kumar, Nikhil and Vaz, Daniel},
  title =	{{Nearly-Tight Bounds for Flow Sparsifiers in Quasi-Bipartite Graphs}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{45:1--45:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.45},
  URN =		{urn:nbn:de:0030-drops-206018},
  doi =		{10.4230/LIPIcs.MFCS.2024.45},
  annote =	{Keywords: Graph Sparsification, Cut Sparsifiers, Flow Sparsifiers, Quasi-bipartite Graphs, Bounded Treewidth}
}
Document
Fully-Adaptive Dynamic Connectivity of Square Intersection Graphs

Authors: Ivor van der Hoog, André Nusser, Eva Rotenberg, and Frank Staals

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
A classical problem in computational geometry and graph algorithms is: given a dynamic set 𝒮 of geometric shapes in the plane, efficiently maintain the connectivity of the intersection graph of 𝒮. Previous papers studied the setting where, before the updates, the data structure receives some parameter P. Then, updates could insert and delete disks as long as at all times the disks have a diameter that lies in a fixed range [1/P, 1]. As a consequence of that prerequisite, the aspect ratio ψ (i.e. the ratio between the largest and smallest diameter) of the disks would at all times satisfy ψ ≤ P. The state of the art for storing disks in a dynamic connectivity data structure uses O(Pn) space and has O(P log⁴ n) expected amortized update time. Connectivity queries between disks are supported in O(log n / log log n) time. In the dynamic setting, one wishes for a more flexible data structure in which disks of any diameter may arrive and leave, changing the aspect ratio freely. Ideally, the aspect ratio should merely be part of the analysis. We restrict our attention to axis-aligned squares, and study fully-dynamic square intersection graph connectivity. Our result is fully-adaptive to the aspect ratio, spending time proportional to the current aspect ratio ψ, as opposed to some previously given maximum P. Our focus on squares allows us to simplify and streamline the connectivity pipeline from previous work. When n is the number of squares and ψ is the aspect ratio after insertion (or before deletion), our data structure answers connectivity queries in O(log n / log log n) time. We can update connectivity information in O(ψ log⁴ n + log⁶ n) amortized time. We also improve space usage from O(P ⋅ n log n) to O(n log³ n log ψ) - while generalizing to a fully-adaptive aspect ratio - which yields a space usage that is near-linear in n for any polynomially bounded ψ.
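
As a static (non-dynamic) illustration of the object being maintained, the sketch below builds the intersection graph of a fixed set of axis-aligned squares by brute force and answers connectivity queries with a union-find structure. It makes no attempt at the update and query bounds discussed above, and the (x, y, side) square representation is an assumption of the example.

class SquareConnectivity:
    # squares: list of (x, y, side) with (x, y) the lower-left corner, axis-aligned.
    def __init__(self, squares):
        self.parent = list(range(len(squares)))
        for i, a in enumerate(squares):
            for j, b in enumerate(squares[:i]):
                if self._intersect(a, b):
                    self._union(i, j)

    @staticmethod
    def _intersect(a, b):
        (ax, ay, asz), (bx, by, bsz) = a, b
        return (ax <= bx + bsz and bx <= ax + asz and
                ay <= by + bsz and by <= ay + asz)

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]    # path halving
            x = self.parent[x]
        return x

    def _union(self, x, y):
        self.parent[self._find(x)] = self._find(y)

    def connected(self, i, j):
        return self._find(i) == self._find(j)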

Cite as

Ivor van der Hoog, André Nusser, Eva Rotenberg, and Frank Staals. Fully-Adaptive Dynamic Connectivity of Square Intersection Graphs. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 63:1-63:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{vanderhoog_et_al:LIPIcs.MFCS.2024.63,
  author =	{van der Hoog, Ivor and Nusser, Andr\'{e} and Rotenberg, Eva and Staals, Frank},
  title =	{{Fully-Adaptive Dynamic Connectivity of Square Intersection Graphs}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{63:1--63:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.63},
  URN =		{urn:nbn:de:0030-drops-206197},
  doi =		{10.4230/LIPIcs.MFCS.2024.63},
  annote =	{Keywords: Computational geometry, planar geometry, data structures, geometric intersection graphs, fully-dynamic algorithms}
}
  • Refine by Author
  • 12 Khanna, Sanjeev
  • 7 Assadi, Sepehr
  • 5 Chen, Yu
  • 3 Kapralov, Michael
  • 2 Behnezhad, Soheil

  • Refine by Classification
  • 11 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 9 Theory of computation → Graph algorithms analysis
  • 5 Theory of computation → Sparsification and spanners
  • 4 Theory of computation
  • 4 Theory of computation → Approximation algorithms analysis

  • Refine by Keyword
  • 3 Sublinear Algorithms
  • 3 streaming algorithms
  • 2 Approximation algorithms
  • 2 Edge coloring
  • 2 Graph Algorithms

  • Refine by Type
  • 47 document

  • Refine by Publication Year
  • 32 2024
  • 4 2020
  • 2 2021
  • 2 2022
  • 1 2008
