20 Search Results for "Dvořák, Wolfgang"


Document
Well-Separation and Hyperplane Transversals in High Dimensions

Authors: Helena Bergold, Daniel Bertschinger, Nicolas Grelier, Wolfgang Mulzer, and Patrick Schnider

Published in: LIPIcs, Volume 227, 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)


Abstract
A family of k point sets in d dimensions is well-separated if the convex hulls of any two disjoint subfamilies can be separated by a hyperplane. Well-separation is a strong assumption that allows us to conclude that certain kinds of generalized ham-sandwich cuts for the point sets exist. But how hard is it to check if a given family of high-dimensional point sets has this property? Starting from this question, we study several algorithmic aspects of the existence of transversals and separations in high dimensions. First, we give an explicit proof that k point sets are well-separated if and only if their convex hulls admit no (k - 2)-transversal, i.e., if there exists no (k - 2)-dimensional flat that intersects the convex hulls of all k sets. It follows that the task of checking well-separation lies in the complexity class coNP. Next, we show that it is NP-hard to decide whether there is a hyperplane-transversal (that is, a (d - 1)-transversal) of a family of d + 1 line segments in ℝ^d, where d is part of the input. As a consequence, it follows that the general problem of testing well-separation is coNP-complete. Furthermore, we show that finding a hyperplane that maximizes the number of intersected sets is NP-hard, but allows for an Ω((log k)/(k log log k))-approximation algorithm that is polynomial in d and k, when each set consists of a single point. When all point sets are finite, we show that checking whether there exists a (k - 2)-transversal is in fact strongly NP-complete. Finally, we take the viewpoint of parametrized complexity, using the dimension d as a parameter: given k convex sets in ℝ^d, checking whether there is a (k-2)-transversal is FPT with respect to d. On the other hand, for k ≥ d+1 finite point sets in ℝ^d, it turns out that checking whether there is a (d-1)-transversal is W[1]-hard with respect to d.
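
For illustration (a minimal sketch, not taken from the paper): the separation primitive underlying well-separation, namely deciding whether two finite point sets can be strictly separated by a hyperplane, is an LP feasibility problem. The sketch assumes scipy is available; the paper's condition additionally quantifies over all pairs of disjoint subfamilies.

import numpy as np
from scipy.optimize import linprog

# Test whether two finite point sets in R^d can be strictly separated by a
# hyperplane w.x + b = 0, via LP feasibility (illustrative sketch only).
def separable(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d = P.shape[1]
    # Constraints: w.p + b <= -1 for p in P, and w.q + b >= +1 for q in Q
    # (the latter written as -w.q - b <= -1 for linprog's A_ub x <= b_ub form).
    A_ub = np.vstack([np.hstack([P, np.ones((len(P), 1))]),
                      np.hstack([-Q, -np.ones((len(Q), 1))])])
    b_ub = -np.ones(len(P) + len(Q))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

print(separable([[0, 0, 0], [1, 0, 0]], [[0, 0, 2], [1, 1, 2]]))  # True
print(separable([[0, 0, 0], [2, 2, 2]], [[1, 1, 1], [3, 3, 3]]))  # False: hulls intersect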

Cite as

Helena Bergold, Daniel Bertschinger, Nicolas Grelier, Wolfgang Mulzer, and Patrick Schnider. Well-Separation and Hyperplane Transversals in High Dimensions. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 227, pp. 16:1-16:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bergold_et_al:LIPIcs.SWAT.2022.16,
  author =	{Bergold, Helena and Bertschinger, Daniel and Grelier, Nicolas and Mulzer, Wolfgang and Schnider, Patrick},
  title =	{{Well-Separation and Hyperplane Transversals in High Dimensions}},
  booktitle =	{18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)},
  pages =	{16:1--16:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-236-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{227},
  editor =	{Czumaj, Artur and Xin, Qin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2022.16},
  URN =		{urn:nbn:de:0030-drops-161766},
  doi =		{10.4230/LIPIcs.SWAT.2022.16},
  annote =	{Keywords: hyperplane transversal, high-dimension, hardness}
}
Document
Nearest-Neighbor Decompositions of Drawings

Authors: Jonas Cleve, Nicolas Grelier, Kristin Knorr, Maarten Löffler, Wolfgang Mulzer, and Daniel Perz

Published in: LIPIcs, Volume 227, 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)


Abstract
Let 𝒟 be a set of straight-line segments in the plane, potentially crossing, and let c be a positive integer. We denote by P the union of the endpoints of the straight-line segments of 𝒟 and of the intersection points between pairs of segments. We say that 𝒟 has a nearest-neighbor decomposition into c parts if we can partition P into c point sets P₁, … , P_c such that 𝒟 is the union of the nearest neighbor graphs on P₁, … , P_c. We show that it is NP-complete to decide whether 𝒟 can be drawn as the union of c ≥ 3 nearest-neighbor graphs, even when no two segments cross. We show that for c = 2, it is NP-complete in the general setting and polynomial-time solvable when no two segments cross. We show the existence of an O(log n)-approximation algorithm running in subexponential time for partitioning 𝒟 into a minimum number of nearest-neighbor graphs. As a main tool in our analysis, we establish the notion of the conflict graph for a drawing 𝒟. The vertices of the conflict graph are the connected components of 𝒟, with the assumption that each connected component is the nearest neighbor graph of its vertices, and there is an edge between two components U and V if and only if the nearest neighbor graph of U ∪ V contains an edge between a vertex in U and a vertex in V. We show that string graphs are conflict graphs of certain planar drawings. For planar graphs and complete k-partite graphs, we give additional, more efficient constructions. We furthermore show that there are subdivisions of non-planar graphs that are not conflict graphs. Lastly, we show a separator lemma for conflict graphs.
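
For illustration (a minimal sketch, not the paper's algorithms): the objects involved are nearest-neighbor graphs, and for a given candidate partition the decomposition condition can be checked by brute force, quadratic time per part, with ties broken arbitrarily.

from math import dist

def nn_graph(points):
    # Undirected edges {p, q} such that q is a nearest neighbor of p (brute force).
    edges = set()
    for i, p in enumerate(points):
        j = min((idx for idx in range(len(points)) if idx != i),
                key=lambda idx: dist(p, points[idx]))
        edges.add(frozenset((tuple(p), tuple(points[j]))))
    return edges

def is_nn_decomposition(segments, parts):
    # segments: iterable of point pairs; parts: a candidate partition P_1, ..., P_c of P.
    target = {frozenset((tuple(a), tuple(b))) for a, b in segments}
    union = set()
    for part in parts:
        union |= nn_graph(part)
    return union == target

# Two far-apart unit segments form the nearest-neighbor graph of a single part.
segs = [((0, 0), (1, 0)), ((5, 0), (6, 0))]
print(is_nn_decomposition(segs, [[(0, 0), (1, 0), (5, 0), (6, 0)]]))  # True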

Cite as

Jonas Cleve, Nicolas Grelier, Kristin Knorr, Maarten Löffler, Wolfgang Mulzer, and Daniel Perz. Nearest-Neighbor Decompositions of Drawings. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 227, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cleve_et_al:LIPIcs.SWAT.2022.21,
  author =	{Cleve, Jonas and Grelier, Nicolas and Knorr, Kristin and L\"{o}ffler, Maarten and Mulzer, Wolfgang and Perz, Daniel},
  title =	{{Nearest-Neighbor Decompositions of Drawings}},
  booktitle =	{18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-236-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{227},
  editor =	{Czumaj, Artur and Xin, Qin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2022.21},
  URN =		{urn:nbn:de:0030-drops-161812},
  doi =		{10.4230/LIPIcs.SWAT.2022.21},
  annote =	{Keywords: nearest-neighbors, decompositions, drawing}
}
Document
No-Dimensional Tverberg Theorems and Algorithms

Authors: Aruni Choudhary and Wolfgang Mulzer

Published in: LIPIcs, Volume 164, 36th International Symposium on Computational Geometry (SoCG 2020)


Abstract
Tverberg’s theorem states that for any k ≥ 2 and any set P ⊂ ℝ^d of at least (d + 1)(k - 1) + 1 points, we can partition P into k subsets whose convex hulls have a non-empty intersection. The associated search problem lies in the complexity class PPAD ∩ PLS, but no hardness results are known. In the colorful Tverberg theorem, the points in P have colors, and under certain conditions, P can be partitioned into colorful sets, in which each color appears exactly once and whose convex hulls intersect. To date, the complexity of the associated search problem is unresolved. Recently, Adiprasito, Bárány, and Mustafa [SODA 2019] gave a no-dimensional Tverberg theorem, in which the convex hulls may intersect in an approximate fashion. This relaxes the requirement on the cardinality of P. The argument is constructive, but does not result in a polynomial-time algorithm. We present a deterministic algorithm that finds for any n-point set P ⊂ ℝ^d and any k ∈ {2, … , n} in O(nd ⌈log k⌉) time a k-partition of P such that there is a ball of radius O((k/√n)diam(P)) that intersects the convex hull of each set. Given that this problem is not known to be solvable exactly in polynomial time, and that there are no approximation algorithms that are truly polynomial in any dimension, our result provides a remarkably efficient and simple new notion of approximation. Our main contribution is to generalize Sarkaria’s method [Israel Journal Math., 1992] to reduce the Tverberg problem to the Colorful Carathéodory problem (in the simplified tensor product interpretation of Bárány and Onn) and to apply it algorithmically. It turns out that this not only leads to an alternative algorithmic proof of a no-dimensional Tverberg theorem, but it also generalizes to other settings such as the colorful variant of the problem.
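
For illustration (a verification sketch, not the paper's algorithm): the exact Tverberg condition for a given k-partition, namely that some point c lies in the convex hull of every part, reduces to k small LP feasibility checks; the no-dimensional variant relaxes the single point c to a small ball. The sketch assumes scipy is available.

import numpy as np
from scipy.optimize import linprog

def in_hull(q, points):
    # Is q a convex combination of the given points?  Feasibility of
    # lambda >= 0, sum(lambda) = 1, lambda^T points = q.
    pts = np.asarray(points, float)        # shape (m, d)
    q = np.asarray(q, float)               # shape (d,)
    m = len(pts)
    A_eq = np.vstack([pts.T, np.ones((1, m))])
    b_eq = np.concatenate([q, [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.success

def is_tverberg_partition(q, parts):
    return all(in_hull(q, part) for part in parts)

parts = [[(0, 0), (3, 0), (0, 3)], [(1, 1), (2, 2)], [(1, 1)]]
print(is_tverberg_partition((1, 1), parts))   # True: (1,1) lies in every hull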

Cite as

Aruni Choudhary and Wolfgang Mulzer. No-Dimensional Tverberg Theorems and Algorithms. In 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 164, pp. 31:1-31:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{choudhary_et_al:LIPIcs.SoCG.2020.31,
  author =	{Choudhary, Aruni and Mulzer, Wolfgang},
  title =	{{No-Dimensional Tverberg Theorems and Algorithms}},
  booktitle =	{36th International Symposium on Computational Geometry (SoCG 2020)},
  pages =	{31:1--31:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-143-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{164},
  editor =	{Cabello, Sergio and Chen, Danny Z.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2020.31},
  URN =		{urn:nbn:de:0030-drops-121893},
  doi =		{10.4230/LIPIcs.SoCG.2020.31},
  annote =	{Keywords: Tverberg’s theorem, Colorful Carath\'{e}odory Theorem, Tensor lifting}
}
Document
On the Stretch Factor of Polygonal Chains

Authors: Ke Chen, Adrian Dumitrescu, Wolfgang Mulzer, and Csaba D. Tóth

Published in: LIPIcs, Volume 138, 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)


Abstract
Let P=(p_1, p_2, ..., p_n) be a polygonal chain. The stretch factor of P is the ratio between the total length of P and the distance between its endpoints, (sum_{i = 1}^{n-1} |p_i p_{i+1}|)/|p_1 p_n|. For a parameter c >= 1, we call P a c-chain if |p_ip_j|+|p_jp_k| <= c|p_ip_k|, for every triple (i,j,k), 1 <= i<j<k <= n. The stretch factor is a global property: it measures how close P is to a straight line, and it involves all the vertices of P; being a c-chain, on the other hand, is a fingerprint-property: it only depends on subsets of O(1) vertices of the chain. We investigate how the c-chain property influences the stretch factor in the plane: (i) we show that for every epsilon > 0, there is a noncrossing c-chain that has stretch factor Omega(n^{1/2-epsilon}), for sufficiently large constant c=c(epsilon); (ii) on the other hand, the stretch factor of a c-chain P is O(n^{1/2}), for every constant c >= 1, regardless of whether P is crossing or noncrossing; and (iii) we give a randomized algorithm that can determine, for a polygonal chain P in R^2 with n vertices, the minimum c >= 1 for which P is a c-chain in O(n^{2.5} polylog n) expected time and O(n log n) space.
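
For illustration: both quantities can be computed directly from their definitions, the stretch factor in linear time and the minimum chain constant c by brute force over all triples in cubic time (the paper's randomized algorithm is far faster).

from math import dist
from itertools import combinations

def stretch_factor(P):
    # Total chain length divided by the distance between the endpoints.
    total = sum(dist(P[i], P[i + 1]) for i in range(len(P) - 1))
    return total / dist(P[0], P[-1])

def min_chain_constant(P):
    # Smallest c such that |p_i p_j| + |p_j p_k| <= c |p_i p_k| for all i < j < k.
    return max((dist(P[i], P[j]) + dist(P[j], P[k])) / dist(P[i], P[k])
               for i, j, k in combinations(range(len(P)), 3))

P = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]   # a zig-zag chain
print(round(stretch_factor(P), 3))             # ~1.414
print(round(min_chain_constant(P), 3))         # smallest c for which P is a c-chain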

Cite as

Ke Chen, Adrian Dumitrescu, Wolfgang Mulzer, and Csaba D. Tóth. On the Stretch Factor of Polygonal Chains. In 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 138, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.MFCS.2019.56,
  author =	{Chen, Ke and Dumitrescu, Adrian and Mulzer, Wolfgang and T\'{o}th, Csaba D.},
  title =	{{On the Stretch Factor of Polygonal Chains}},
  booktitle =	{44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-117-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{138},
  editor =	{Rossmanith, Peter and Heggernes, Pinar and Katoen, Joost-Pieter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2019.56},
  URN =		{urn:nbn:de:0030-drops-110005},
  doi =		{10.4230/LIPIcs.MFCS.2019.56},
  annote =	{Keywords: polygonal chain, vertex dilation, Koch curve, recursive construction}
}
Document
Near-Linear Time Algorithms for Streett Objectives in Graphs and MDPs

Authors: Krishnendu Chatterjee, Wolfgang Dvořák, Monika Henzinger, and Alexander Svozil

Published in: LIPIcs, Volume 140, 30th International Conference on Concurrency Theory (CONCUR 2019)


Abstract
The fundamental model-checking problem, given as input a model and a specification, asks for the algorithmic verification of whether the model satisfies the specification. Two classical models for reactive systems are graphs and Markov decision processes (MDPs). A basic specification formalism in the verification of reactive systems is the strong fairness (aka Streett) objective, where given different types of requests and corresponding grants, the requirement is that for each type, if the request event happens infinitely often, then the corresponding grant event must also happen infinitely often. All omega-regular objectives can be expressed as Streett objectives and hence they are canonical in verification. Consider graphs/MDPs with n vertices, m edges, and a Streett objective with k pairs, and let b denote the size of the description of the Streett objective for the sets of requests and grants. The current best-known algorithm for the problem requires time O(min(n^2, m sqrt(m log n)) + b log n). In this work we present randomized near-linear time algorithms, with expected running time O~(m + b), where the O~ notation hides poly-log factors. Our randomized algorithms are near-linear in the size of the input, and hence optimal up to poly-log factors.
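
For illustration (the definition only, not the near-linear time algorithms of the paper): on a lasso-shaped run, the Streett condition depends only on the set of vertices visited infinitely often, which must meet the grant set of every pair whose request set it meets.

# Check the Streett condition for the set of vertices visited infinitely often,
# given k request/grant pairs (R_j, G_j): whenever R_j is hit infinitely often,
# G_j must be hit infinitely often as well.
def satisfies_streett(inf_visited, pairs):
    inf_visited = set(inf_visited)
    return all(not (inf_visited & set(R)) or (inf_visited & set(G))
               for R, G in pairs)

pairs = [({1, 2}, {5}), ({3}, {6})]
print(satisfies_streett({1, 5, 7}, pairs))  # True: request 1 seen, grant 5 seen
print(satisfies_streett({3, 7}, pairs))     # False: request 3 seen, grant 6 never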

Cite as

Krishnendu Chatterjee, Wolfgang Dvořák, Monika Henzinger, and Alexander Svozil. Near-Linear Time Algorithms for Streett Objectives in Graphs and MDPs. In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chatterjee_et_al:LIPIcs.CONCUR.2019.7,
  author =	{Chatterjee, Krishnendu and Dvo\v{r}\'{a}k, Wolfgang and Henzinger, Monika and Svozil, Alexander},
  title =	{{Near-Linear Time Algorithms for Streett Objectives in Graphs and MDPs}},
  booktitle =	{30th International Conference on Concurrency Theory (CONCUR 2019)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-121-4},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{140},
  editor =	{Fokkink, Wan and van Glabbeek, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2019.7},
  URN =		{urn:nbn:de:0030-drops-109093},
  doi =		{10.4230/LIPIcs.CONCUR.2019.7},
  annote =	{Keywords: model checking, graph games, Streett games}
}
Document
Maintaining the Union of Unit Discs Under Insertions with Near-Optimal Overhead

Authors: Pankaj K. Agarwal, Ravid Cohen, Dan Halperin, and Wolfgang Mulzer

Published in: LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)


Abstract
We present efficient data structures for problems on unit discs and arcs of their boundary in the plane. (i) We give an output-sensitive algorithm for the dynamic maintenance of the union of n unit discs under insertions in O(k log^2 n) update time and O(n) space, where k is the combinatorial complexity of the structural change in the union due to the insertion of the new disc. (ii) As part of the solution of (i) we devise a fully dynamic data structure for the maintenance of lower envelopes of pseudo-lines, which we believe is of independent interest. The structure has O(log^2 n) update time and O(log n) vertical ray shooting query time. To achieve this performance, we devise a new algorithm for finding the intersection between two lower envelopes of pseudo-lines in O(log n) time, using tentative binary search; the lower envelopes are special in that at x=-infty any pseudo-line contributing to the first envelope lies below every pseudo-line contributing to the second envelope. (iii) We also present a dynamic range searching structure for a set of circular arcs of unit radius (not necessarily on the boundary of the union of the corresponding discs), where the ranges are unit discs, with O(n log n) preprocessing time, O(n^{1/2+epsilon} + l) query time, for any fixed epsilon > 0, and O(log^2 n) amortized update time, where l is the size of the output. The structure requires O(n) storage space.
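
For intuition (not the paper's data structure): the query answered by the lower-envelope structure in (ii) is vertical ray shooting, i.e., reporting the lowest curve at a given abscissa. A naive linear-time baseline over genuine lines y = a*x + b stands in for pseudo-lines below; the paper maintains the envelope dynamically with O(log^2 n) updates and O(log n) queries.

def lower_envelope_query(lines, x):
    # Return (value, line) of the lowest line y = a*x + b at abscissa x (naive O(n)).
    return min(((a * x + b, (a, b)) for a, b in lines), key=lambda t: t[0])

lines = [(1.0, 0.0), (-1.0, 2.0), (0.0, 0.5)]
print(lower_envelope_query(lines, -3.0))   # (-3.0, (1.0, 0.0)): y = x is lowest here
print(lower_envelope_query(lines, 3.0))    # (-1.0, (-1.0, 2.0)): y = -x + 2 is lowest here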

Cite as

Pankaj K. Agarwal, Ravid Cohen, Dan Halperin, and Wolfgang Mulzer. Maintaining the Union of Unit Discs Under Insertions with Near-Optimal Overhead. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.SoCG.2019.26,
  author =	{Agarwal, Pankaj K. and Cohen, Ravid and Halperin, Dan and Mulzer, Wolfgang},
  title =	{{Maintaining the Union of Unit Discs Under Insertions with Near-Optimal Overhead}},
  booktitle =	{35th International Symposium on Computational Geometry (SoCG 2019)},
  pages =	{26:1--26:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-104-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{129},
  editor =	{Barequet, Gill and Wang, Yusu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.26},
  URN =		{urn:nbn:de:0030-drops-104307},
  doi =		{10.4230/LIPIcs.SoCG.2019.26},
  annote =	{Keywords: lower envelopes, pseudo-lines, unit discs, range search, dynamic algorithms, tentative binary search}
}
Document
Approximate Minimum-Weight Matching with Outliers Under Translation

Authors: Pankaj K. Agarwal, Haim Kaplan, Geva Kipper, Wolfgang Mulzer, Günter Rote, Micha Sharir, and Allen Xiao

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
Our goal is to compare two planar point sets by finding subsets of a given size such that a minimum-weight matching between them has the smallest weight. In addition, one of the sets may be translated so as to minimize the weight of the matching. We give efficient algorithms (a) for finding approximately optimal matchings, when the cost of a matching is the L_p-norm of the tuple of the Euclidean distances between the pairs of matched points, for any p in [1,infty], and (b) for constructing small-size approximate minimization (or matching) diagrams: partitions of the translation space into regions, together with an approximate optimal matching for each region.
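
For illustration (a tiny exact baseline without outliers, not the paper's approximation algorithms): for equal-size sets one can evaluate the minimum-weight Euclidean matching, via the Hungarian method in scipy, over the candidate translations that align one point of B with one point of A; restricting attention to these candidate translations is a heuristic assumption made only for this sketch.

import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_cost(A, B):
    # Minimum total Euclidean distance over perfect matchings between A and B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    r, c = linear_sum_assignment(D)
    return D[r, c].sum()

def best_translation(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    candidates = [a - b for a in A for b in B]    # translations aligning one pair
    return min(((matching_cost(A, B + t), t) for t in candidates),
               key=lambda x: x[0])

A = [(0, 0), (1, 0), (0, 1)]
B = [(10, 10), (11, 10), (10, 11)]
cost, t = best_translation(A, B)
print(round(cost, 6), t)                          # 0.0 [-10. -10.]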

Cite as

Pankaj K. Agarwal, Haim Kaplan, Geva Kipper, Wolfgang Mulzer, Günter Rote, Micha Sharir, and Allen Xiao. Approximate Minimum-Weight Matching with Outliers Under Translation. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ISAAC.2018.26,
  author =	{Agarwal, Pankaj K. and Kaplan, Haim and Kipper, Geva and Mulzer, Wolfgang and Rote, G\"{u}nter and Sharir, Micha and Xiao, Allen},
  title =	{{Approximate Minimum-Weight Matching with Outliers Under Translation}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{26:1--26:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.26},
  URN =		{urn:nbn:de:0030-drops-99747},
  doi =		{10.4230/LIPIcs.ISAAC.2018.26},
  annote =	{Keywords: Minimum-weight partial matching, Pattern matching, Approximation}
}
Document
Long Term Memory and the Densest K-Subgraph Problem

Authors: Robert Legenstein, Wolfgang Maass, Christos H. Papadimitriou, and Santosh S. Vempala

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
In a recent experiment, a cell in the human medial temporal lobe (MTL) encoding one sensory stimulus starts to also respond to a second stimulus following a combined experience associating the two. We develop a theoretical model predicting that an assembly of cells with exceptionally high synaptic intraconnectivity can emerge, in response to a particular sensory experience, to encode and abstract that experience. We also show that two such assemblies are modified to increase their intersection after a sensory event that associates the two corresponding stimuli. The main technical tools employed are random graph theory, and Bernoulli approximations. Assembly creation must overcome a computational challenge akin to the Densest K-Subgraph problem, namely selecting, from a large population of randomly and sparsely interconnected cells, a subset with exceptionally high density of interconnections. We identify three mechanisms that help achieve this feat in our model: (1) a simple two-stage randomized algorithm, and (2) the "triangle completion bias" in synaptic connectivity and a "birthday paradox", while (3) the strength of these connections is enhanced through Hebbian plasticity.
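
For illustration only (a generic greedy heuristic for the combinatorial objective, not the paper's neural mechanisms or its two-stage randomized algorithm): the Densest-k-Subgraph flavor of the task is to pick k vertices spanning many edges.

# Stage 1: seed with the highest-degree vertex; stage 2: repeatedly add the
# vertex with the most neighbors inside the current set.  Toy heuristic only.
def greedy_dense_k(adj, k):
    # adj: dict vertex -> set of neighbors
    seed = max(adj, key=lambda v: len(adj[v]))
    S = {seed}
    while len(S) < k:
        v = max((u for u in adj if u not in S),
                key=lambda u: len(adj[u] & S))
        S.add(v)
    return S

def internal_edges(adj, S):
    return sum(len(adj[v] & S) for v in S) // 2

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
S = greedy_dense_k(adj, 3)
print(S, internal_edges(adj, S))   # {1, 2, 3} with 3 internal edges (a triangle)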

Cite as

Robert Legenstein, Wolfgang Maass, Christos H. Papadimitriou, and Santosh S. Vempala. Long Term Memory and the Densest K-Subgraph Problem. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{legenstein_et_al:LIPIcs.ITCS.2018.57,
  author =	{Legenstein, Robert and Maass, Wolfgang and Papadimitriou, Christos H. and Vempala, Santosh S.},
  title =	{{Long Term Memory and the Densest K-Subgraph Problem}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.57},
  URN =		{urn:nbn:de:0030-drops-83593},
  doi =		{10.4230/LIPIcs.ITCS.2018.57},
  annote =	{Keywords: Brain computation, long term memory, assemblies, association}
}
Document
Improved Set-Based Symbolic Algorithms for Parity Games

Authors: Krishnendu Chatterjee, Wolfgang Dvorák, Monika Henzinger, and Veronika Loitzenbauer

Published in: LIPIcs, Volume 82, 26th EACSL Annual Conference on Computer Science Logic (CSL 2017)


Abstract
Graph games with omega-regular winning conditions provide a mathematical framework to analyze a wide range of problems in the analysis of reactive systems and programs (such as the synthesis of reactive systems, program repair, and the verification of branching time properties). Parity conditions are canonical forms to specify omega-regular winning conditions. Graph games with parity conditions are equivalent to mu-calculus model checking, and thus a very important algorithmic problem. Symbolic algorithms are of great significance because they provide scalable algorithms for the analysis of large finite-state systems, as well as algorithms for the analysis of infinite-state systems with finite quotient. A set-based symbolic algorithm uses the basic set operations and the one-step predecessor operators. We consider graph games with n vertices and parity conditions with c priorities (equivalently, a mu-calculus formula with c alternations of least and greatest fixed points). While many explicit algorithms exist for graph games with parity conditions, for set-based symbolic algorithms there are only two algorithms (notice that we use space to refer to the number of sets stored by a symbolic algorithm): (a) the basic algorithm that requires O(n^c) symbolic operations and linear space; and (b) an improved algorithm that requires O(n^{c/2+1}) symbolic operations but also O(n^{c/2+1}) space (i.e., exponential space). In this work we present two set-based symbolic algorithms for parity games: (a) our first algorithm requires O(n^{c/2+1}) symbolic operations and only requires linear space; and (b) developing on our first algorithm, we present an algorithm that requires O(n^{c/3+1}) symbolic operations and only linear space. We also present the first linear space set-based symbolic algorithm for parity games that requires at most a sub-exponential number of symbolic operations.
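
For illustration: the set-based primitives the abstract refers to are basic set operations plus a one-step (controllable) predecessor operator; iterating them yields the attractor computation that underlies symbolic game algorithms. The sketch below shows these two building blocks on an explicitly represented game graph, not the parity-game algorithms of the paper.

def cpre(V, V0, E, T):
    # One-step controllable predecessor: player-0 vertices with some successor
    # in T, and player-1 vertices whose successors (at least one) all lie in T.
    succ = {v: {w for (u, w) in E if u == v} for v in V}
    return {v for v in V
            if (v in V0 and succ[v] & T)
            or (v not in V0 and succ[v] and succ[v] <= T)}

def attractor(V, V0, E, T):
    # Least fixed point of A = T | cpre(A): vertices from which player 0 can
    # force the play to reach T.
    A = set(T)
    while True:
        A_new = A | cpre(V, V0, E, A)
        if A_new == A:
            return A
        A = A_new

V = {1, 2, 3, 4}
V0 = {1}                                    # vertex 1 belongs to player 0
E = {(1, 2), (1, 3), (2, 4), (3, 3), (4, 4)}
print(attractor(V, V0, E, {4}))             # {1, 2, 4}: from 3 the target is avoided forever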

Cite as

Krishnendu Chatterjee, Wolfgang Dvorák, Monika Henzinger, and Veronika Loitzenbauer. Improved Set-Based Symbolic Algorithms for Parity Games. In 26th EACSL Annual Conference on Computer Science Logic (CSL 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 82, pp. 18:1-18:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chatterjee_et_al:LIPIcs.CSL.2017.18,
  author =	{Chatterjee, Krishnendu and Dvor\'{a}k, Wolfgang and Henzinger, Monika and Loitzenbauer, Veronika},
  title =	{{Improved Set-Based Symbolic Algorithms for Parity Games}},
  booktitle =	{26th EACSL Annual Conference on Computer Science Logic (CSL 2017)},
  pages =	{18:1--18:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-045-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{82},
  editor =	{Goranko, Valentin and Dam, Mads},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2017.18},
  URN =		{urn:nbn:de:0030-drops-76830},
  doi =		{10.4230/LIPIcs.CSL.2017.18},
  annote =	{Keywords: model checking, graph games, parity games, symbolic computation, progress measure}
}
Document
Improved Time-Space Trade-Offs for Computing Voronoi Diagrams

Authors: Bahareh Banyassady, Matias Korman, Wolfgang Mulzer, André van Renssen, Marcel Roeloffzen, Paul Seiferth, and Yannik Stein

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
Let P be a planar n-point set in general position. For k between 1 and n-1, the Voronoi diagram of order k is obtained by subdividing the plane into regions such that points in the same cell have the same set of nearest k neighbors in P. The (nearest point) Voronoi diagram (NVD) and the farthest point Voronoi diagram (FVD) are the particular cases of k=1 and k=n-1, respectively. It is known that the family of all higher-order Voronoi diagrams of order 1 to K for P can be computed in total time O(n K^2 + n log n) using O(K^2(n-K)) space. Also NVD and FVD can be computed in O(n log n) time using O(n) space. For s in {1, ..., n}, an s-workspace algorithm has random access to a read-only array with the sites of P in arbitrary order. Additionally, the algorithm may use O(s) words of Theta(log n) bits each for reading and writing intermediate data. The output can be written only once and cannot be accessed afterwards. We describe a deterministic s-workspace algorithm for computing an NVD and also an FVD for P that runs in O((n^2/s) log s) time. Moreover, we generalize our s-workspace algorithm to compute the family of all higher-order Voronoi diagrams of P up to order K, for K in O(sqrt(s)), in total time O( (n^2 K^6 / s) log^(1+epsilon)(K) (log s / log K)^(O(1)) ) for any fixed epsilon > 0. Previously, for Voronoi diagrams, the only known s-workspace algorithm was to find an NVD for P in expected time O((n^2/s) log s + n log s log^*s). Unlike the previous algorithm, our new method is very simple and does not rely on advanced data structures or random sampling techniques.
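
For illustration (the definition only, not the s-workspace algorithm): two points of the plane lie in the same cell of the order-k Voronoi diagram of P exactly when they have the same set of k nearest sites, which is straightforward to evaluate for a single query point.

from math import dist

def k_nearest_sites(P, q, k):
    # Indices of the k sites of P closest to q (ties broken by index).
    return frozenset(sorted(range(len(P)), key=lambda i: dist(P[i], q))[:k])

P = [(0, 0), (4, 0), (0, 4), (4, 4)]
print(k_nearest_sites(P, (1.0, 0.5), 2))   # frozenset({0, 1})
print(k_nearest_sites(P, (0.5, 1.0), 2))   # frozenset({0, 2}): a different order-2 cell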

Cite as

Bahareh Banyassady, Matias Korman, Wolfgang Mulzer, André van Renssen, Marcel Roeloffzen, Paul Seiferth, and Yannik Stein. Improved Time-Space Trade-Offs for Computing Voronoi Diagrams. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{banyassady_et_al:LIPIcs.STACS.2017.9,
  author =	{Banyassady, Bahareh and Korman, Matias and Mulzer, Wolfgang and van Renssen, Andr\'{e} and Roeloffzen, Marcel and Seiferth, Paul and Stein, Yannik},
  title =	{{Improved Time-Space Trade-Offs for Computing Voronoi Diagrams}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.9},
  URN =		{urn:nbn:de:0030-drops-70249},
  doi =		{10.4230/LIPIcs.STACS.2017.9},
  annote =	{Keywords: memory-constrained model, Voronoi diagram, time-space trade-off}
}
Document
Conditionally Optimal Algorithms for Generalized Büchi Games

Authors: Krishnendu Chatterjee, Wolfgang Dvorák, Monika Henzinger, and Veronika Loitzenbauer

Published in: LIPIcs, Volume 58, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)


Abstract
Games on graphs provide the appropriate framework to study several central problems in computer science, such as verification and synthesis of reactive systems. One of the most basic objectives for games on graphs is the liveness (or Büchi) objective that given a target set of vertices requires that some vertex in the target set is visited infinitely often. We study generalized Büchi objectives (i.e., conjunction of liveness objectives), and implications between two generalized Büchi objectives (known as GR(1) objectives), that arise in numerous applications in computer-aided verification. We present improved algorithms and conditional super-linear lower bounds based on widely believed assumptions about the complexity of (A1) combinatorial Boolean matrix multiplication and (A2) CNF-SAT. We consider graph games with n vertices, m edges, and generalized Büchi objectives with k conjunctions. First, we present an algorithm with running time O(k*n^2), improving the previously known O(k*n*m) and O(k^2*n^2) worst-case bounds. Our algorithm is optimal for dense graphs under (A1). Second, we show that the basic algorithm for the problem is optimal for sparse graphs when the target sets have constant size under (A2). Finally, we consider GR(1) objectives, with k_1 conjunctions in the antecedent and k_2 conjunctions in the consequent, and present an O(k_1 k_2 n^{2.5})-time algorithm, improving the previously known O(k_1*k_2*n*m)-time algorithm for m > n^{1.5}.
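
For illustration of the objective (the 1-player graph case only, not the game algorithms or the conditional lower bounds of the paper): a vertex satisfies a generalized Büchi objective with target sets T_1, ..., T_k iff it can reach a strongly connected component that contains an edge and intersects every T_j. The sketch assumes networkx is available.

import networkx as nx

def gen_buchi_winning(G, targets):
    # Vertices in "good" SCCs: SCCs with at least one internal edge that meet all targets.
    good = set()
    for C in nx.strongly_connected_components(G):
        has_edge = len(C) > 1 or any(G.has_edge(v, v) for v in C)
        if has_edge and all(C & set(T) for T in targets):
            good |= C
    # Add everything that can reach a good SCC.
    win = set(good)
    for v in good:
        win |= nx.ancestors(G, v)
    return win

G = nx.DiGraph([(1, 2), (2, 3), (3, 2), (3, 4), (4, 4)])
print(gen_buchi_winning(G, [{2}, {3}]))    # {1, 2, 3}: the cycle 2<->3 hits both target sets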

Cite as

Krishnendu Chatterjee, Wolfgang Dvorák, Monika Henzinger, and Veronika Loitzenbauer. Conditionally Optimal Algorithms for Generalized Büchi Games. In 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 58, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chatterjee_et_al:LIPIcs.MFCS.2016.25,
  author =	{Chatterjee, Krishnendu and Dvor\'{a}k, Wolfgang and Henzinger, Monika and Loitzenbauer, Veronika},
  title =	{{Conditionally Optimal Algorithms for Generalized B\"{u}chi Games}},
  booktitle =	{41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)},
  pages =	{25:1--25:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-016-3},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{58},
  editor =	{Faliszewski, Piotr and Muscholl, Anca and Niedermeier, Rolf},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2016.25},
  URN =		{urn:nbn:de:0030-drops-64403},
  doi =		{10.4230/LIPIcs.MFCS.2016.25},
  annote =	{Keywords: generalized B\"{u}chi objective, GR(1) objective, conditional lower bounds, graph games, graph algorithms, computer-aided verification}
}
Document
Welfare Maximization with Friends-of-Friends Network Externalities

Authors: Sayan Bhattacharya, Wolfgang Dvorák, Monika Henzinger, and Martin Starnberger

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
Online social networks allow the collection of large amounts of data about the influence between users connected by a friendship-like relationship. When distributing items among agents forming a social network, this information allows us to exploit network externalities that each agent receives from his neighbors that get the same item. In this paper we consider Friends-of-Friends (2-hop) network externalities, i.e., externalities that not only depend on the neighbors that get the same item but also on neighbors of neighbors. For these externalities we study a setting where multiple different items are assigned to unit-demand agents. Specifically, we study the problem of welfare maximization under different types of externality functions. Let n be the number of agents and m be the number of items. Our contributions are the following: (1) We show that welfare maximization is APX-hard; we show that even for step functions with 2-hop (and also with 1-hop) externalities it is NP-hard to approximate social welfare better than (1-1/e). (2) On the positive side we present (i) an O(sqrt(n))-approximation algorithm for general concave externality functions, (ii) an O(log m)-approximation algorithm for linear externality functions, and (iii) a (1-1/e)/6-approximation algorithm for 2-hop step function externalities. We also improve the result from [6] for 1-hop step function externalities by giving a (1-1/e)/2-approximation algorithm.
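
For illustration (one plausible reading of a step-function externality, used only to make the objective concrete; the paper's precise valuation model and its algorithms are not reproduced here): the social welfare of a fixed assignment under a 2-hop step function can be evaluated directly.

def two_hop_neighbors(adj, v):
    # All agents within distance 1 or 2 of v in the social network, excluding v.
    one = set(adj[v])
    two = {w for u in one for w in adj[u]} - {v}
    return one | two

def welfare(adj, assignment, threshold):
    # Each agent contributes 1 if at least `threshold` agents within two hops
    # received the same item, and 0 otherwise (illustrative step function).
    total = 0
    for v, item in assignment.items():
        same = sum(1 for u in two_hop_neighbors(adj, v) if assignment[u] == item)
        total += 1 if same >= threshold else 0
    return total

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}     # a path 1-2-3-4
assignment = {1: 'a', 2: 'a', 3: 'b', 4: 'a'}
print(welfare(adj, assignment, threshold=1))     # 3: agents 1, 2 and 4 see another 'a' nearby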

Cite as

Sayan Bhattacharya, Wolfgang Dvorák, Monika Henzinger, and Martin Starnberger. Welfare Maximization with Friends-of-Friends Network Externalities. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 90-102, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.STACS.2015.90,
  author =	{Bhattacharya, Sayan and Dvor\'{a}k, Wolfgang and Henzinger, Monika and Starnberger, Martin},
  title =	{{Welfare Maximization with Friends-of-Friends Network Externalities}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{90--102},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.90},
  URN =		{urn:nbn:de:0030-drops-49066},
  doi =		{10.4230/LIPIcs.STACS.2015.90},
  annote =	{Keywords: network externalities, welfare maximization, approximation algorithms}
}
Document
Solovay functions and K-triviality

Authors: Laurent Bienvenu, Wolfgang Merkle, and Andre Nies

Published in: LIPIcs, Volume 9, 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)


Abstract
As part of his groundbreaking work on algorithmic randomness, Solovay demonstrated in the 1970s the remarkable fact that there are computable upper bounds of prefix-free Kolmogorov complexity K that are tight on infinitely many values (up to an additive constant). Such computable upper bounds are called Solovay functions. Recent work of Bienvenu and Downey [STACS 2009, LIPIcs 3, pp. 147-158] indicates that Solovay functions are deeply connected with central concepts of algorithmic randomness such as Ω numbers, K-triviality, and Martin-Löf randomness. In what follows, among other results we answer two open problems posed by Bienvenu and Downey about the definition of K-triviality and about the Gács-Miller-Yu characterization of Martin-Löf randomness. The former defines a sequence A to be K-trivial if K(A|n) <=^+ K(n), the latter asserts that a sequence A is Martin-Löf random iff C(A|n) >=^+ n-K(n). So both involve the noncomputable function K. As our main results we show that in both cases K(n) can be equivalently replaced by any Solovay function, and, what is more, that among all computable functions such a replacement is possible exactly for the Solovay functions. Moreover, similar statements hold for the larger class of all right-c.e. functions in place of the computable functions. These full characterizations, besides having significant theoretical interest on their own, will be useful as tools when working with K-trivial and Martin-Löf random sequences.

Cite as

Laurent Bienvenu, Wolfgang Merkle, and Andre Nies. Solovay functions and K-triviality. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 452-463, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{bienvenu_et_al:LIPIcs.STACS.2011.452,
  author =	{Bienvenu, Laurent and Merkle, Wolfgang and Nies, Andre},
  title =	{{Solovay functions and K-triviality}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{452--463},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.452},
  URN =		{urn:nbn:de:0030-drops-30345},
  doi =		{10.4230/LIPIcs.STACS.2011.452},
  annote =	{Keywords: Algorithmic randomness, Kolmogorov complexity, K-triviality}
}
Document
One-shot Learning of Poisson Distributions in fast changing environments

Authors: Peter Tino

Published in: Dagstuhl Seminar Proceedings, Volume 10302, Learning paradigms in dynamic environments (2010)


Abstract
In Bioinformatics, Audic and Claverie were among the first to systematically study the influence of random fluctuations and sampling size on the reliability of digital expression profile data. For a transcript representing a small fraction of the library and a large number N of clones, the probability of observing x tags of the same gene will be well-approximated by the Poisson distribution parametrised by its mean (and variance) m>0, where the unknown parameter m signifies the number of transcripts of the given type (tag) per N clones in the cDNA library. On an abstract level, to determine whether a gene is differentially expressed or not, one has two numbers generated from two distinct Poisson distributions and based on this (extremely sparse) sample one has to decide whether the two Poisson distributions are identical or not. This can be used e.g. to determine equivalence of Poisson photon sources (up to time shift) in gravitational lensing. Each Poisson distribution is represented by a single measurement only, which is, of course, from a purely statistical standpoint very problematic. The key instrument of the Audic-Claverie approach is a distribution P over tag counts y in one library informed by the tag count x in the other library, under the null hypothesis that the tag counts are generated from the same but unknown Poisson distribution. P is obtained by Bayesian averaging (infinite mixture) of all possible Poisson distributions with mixing proportions equal to the posteriors (given x) under the flat prior over m. We ask: Given that the tag count samples from SAGE libraries are *extremely* limited, how useful actually is the Audic-Claverie methodology? We rigorously analyse the A-C statistic P that forms a backbone of the methodology and represents our knowledge of the underlying tag generating process based on one observation. We show that the A-C statistic P and the underlying Poisson distribution of the tag counts share the same mode structure. Moreover, the K-L divergence from the true unknown Poisson distribution to the A-C statistic is minimised when the A-C statistic is conditioned on the mode of the Poisson distribution. Most importantly (and perhaps rather surprisingly), the expectation of this K-L divergence never exceeds 1/2 bit! This constitutes a rigorous quantitative argument, extending the previous empirical Monte Carlo studies, that supports the widespread use of the Audic-Claverie method, even though by their very nature, the SAGE libraries represent very sparse samples.
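
For illustration: under the equal-library-size assumption, the A-C statistic described above has the closed form p(y|x) = C(x+y, y) / 2^(x+y+1), a negative binomial distribution with r = x+1 and p = 1/2, which makes its mode structure easy to inspect numerically.

from math import comb

def audic_claverie(y, x):
    # Audic-Claverie statistic for equal library sizes:
    # Bayesian average of Poisson(y | m) over the posterior of m given x (flat prior).
    return comb(x + y, y) / 2 ** (x + y + 1)

x = 5
probs = [audic_claverie(y, x) for y in range(200)]
print(round(sum(probs), 6))                      # ~1.0: it is a probability distribution
print(max(range(200), key=lambda y: probs[y]))   # 4 (tied with 5): the mode sits at/near x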

Cite as

Peter Tino. One-shot Learning of Poisson Distributions in fast changing environments. In Learning paradigms in dynamic environments. Dagstuhl Seminar Proceedings, Volume 10302, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{tino:DagSemProc.10302.4,
  author =	{Tino, Peter},
  title =	{{One-shot Learning of Poisson Distributions in fast changing environments}},
  booktitle =	{Learning paradigms in dynamic environments},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10302},
  editor =	{Barbara Hammer and Pascal Hitzler and Wolfgang Maass and Marc Toussaint},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.10302.4},
  URN =		{urn:nbn:de:0030-drops-27998},
  doi =		{10.4230/DagSemProc.10302.4},
  annote =	{Keywords: Audic-Claverie statistic, Bayesian averaging, information theory, one-shot learning, Poisson distribution}
}
Document
Perspectives of Neuro-Symbolic Integration – Extended Abstract –

Authors: Kai-Uwe Kühnberger, Helmar Gust, and Peter Geibel

Published in: Dagstuhl Seminar Proceedings, Volume 8041, Recurrent Neural Networks- Models, Capacities, and Applications (2008)


Abstract
There is an obvious tension between symbolic and subsymbolic theories, because both show complementary strengths and weaknesses in corresponding applications and underlying methodologies. The resulting gap in the foundations and the applicability of these approaches is theoretically unsatisfactory and practically undesirable. We sketch a theory that bridges this gap between symbolic and subsymbolic approaches by the introduction of a Topos-based semi-symbolic level used for coding logical first-order expressions in a homogeneous framework. This semi-symbolic level can be used for neural learning of logical first-order theories. Besides a presentation of the general idea of the framework, we sketch some challenges and important open problems for future research with respect to the presented approach and the field of neuro-symbolic integration, in general.

Cite as

Kai-Uwe Kühnberger, Helmar Gust, and Peter Geibel. Perspectives of Neuro-Symbolic Integration – Extended Abstract –. In Recurrent Neural Networks- Models, Capacities, and Applications. Dagstuhl Seminar Proceedings, Volume 8041, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{kuhnberger_et_al:DagSemProc.08041.4,
  author =	{K\"{u}hnberger, Kai-Uwe and Gust, Helmar and Geibel, Peter},
  title =	{{Perspectives of Neuro--Symbolic Integration – Extended Abstract --}},
  booktitle =	{Recurrent Neural Networks- Models, Capacities, and Applications},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8041},
  editor =	{Luc De Raedt and Barbara Hammer and Pascal Hitzler and Wolfgang Maass},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08041.4},
  URN =		{urn:nbn:de:0030-drops-14226},
  doi =		{10.4230/DagSemProc.08041.4},
  annote =	{Keywords: Neuro-Symbolic Integration, Topos Theory, First-Order Logic}
}
  • Refine by Author
  • 7 Mulzer, Wolfgang
  • 4 Henzinger, Monika
  • 3 Chatterjee, Krishnendu
  • 3 Dvorák, Wolfgang
  • 2 Agarwal, Pankaj K.
  • Show More...

  • Refine by Classification
  • 5 Theory of computation → Computational geometry
  • 1 Mathematics of computing → Combinatorial algorithms
  • 1 Mathematics of computing → Combinatorics
  • 1 Mathematics of computing → Paths and connectivity problems
  • 1 Software and its engineering → Formal software verification
  • Show More...

  • Refine by Keyword
  • 3 graph games
  • 2 model checking
  • 1 Algorithmic randomness
  • 1 Approximation
  • 1 Audic-Claverie statistic
  • Show More...

  • Refine by Type
  • 20 document

  • Refine by Publication Year
  • 3 2019
  • 2 2007
  • 2 2017
  • 2 2018
  • 2 2022
  • Show More...
