Search Results

Documents authored by Dadush, Daniel


Document
Asymptotic Bounds on the Combinatorial Diameter of Random Polytopes

Authors: Gilles Bonnet, Daniel Dadush, Uri Grupel, Sophie Huiberts, and Galyna Livshyts

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
The combinatorial diameter diam(P) of a polytope P is the maximum shortest path distance between any pair of vertices. In this paper, we provide upper and lower bounds on the combinatorial diameter of a random "spherical" polytope, which is tight to within one factor of dimension when the number of inequalities is large compared to the dimension. More precisely, for an n-dimensional polytope P defined by the intersection of m i.i.d. half-spaces whose normals are chosen uniformly from the sphere, we show that diam(P) is Ω(n m^{1/(n-1)}) and O(n² m^{1/(n-1)} + n⁵ 4ⁿ) with high probability when m ≥ 2^{Ω(n)}. For the upper bound, we first prove that the number of vertices in any fixed two-dimensional projection sharply concentrates around its expectation when m is large, where we rely on the Θ(n² m^{1/(n-1)}) bound on the expectation due to Borgwardt [Math. Oper. Res., 1999]. To obtain the diameter upper bound, we stitch these "shadow paths" together over a suitable net, using worst-case diameter bounds to connect vertices to the nearest shadow. For the lower bound, we first reduce to lower bounding the diameter of the dual polytope P^∘, corresponding to a random convex hull, by showing the relation diam(P) ≥ (n-1)(diam(P^∘)-2). We then prove that the shortest path between any "nearly" antipodal pair of vertices of P^∘ has length Ω(m^{1/(n-1)}).
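
As a concrete illustration of the random model and of the stated bounds, the following minimal sketch samples the defining half-spaces and evaluates the two asymptotic expressions for example values of n and m. It assumes the normalization aᵢ·x ≤ 1 for the half-space offsets (the abstract does not fix them), and the function names are ours.

```python
import numpy as np

def random_spherical_halfspaces(n, m, seed=0):
    """Sample m i.i.d. unit normals uniformly from the sphere S^{n-1}.

    The random polytope is P = {x : A x <= 1}, assuming unit offsets
    (the abstract leaves the offsets unspecified)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=1, keepdims=True)  # normalize onto the sphere
    return A

def diameter_bound_expressions(n, m):
    """Evaluate the asymptotic expressions from the abstract (constants omitted)."""
    lower = n * m ** (1.0 / (n - 1))                    # Omega(n m^{1/(n-1)})
    upper = n**2 * m ** (1.0 / (n - 1)) + n**5 * 4**n   # O(n^2 m^{1/(n-1)} + n^5 4^n)
    return lower, upper

if __name__ == "__main__":
    n, m = 4, 10**6
    A = random_spherical_halfspaces(n, m)
    lo, hi = diameter_bound_expressions(n, m)
    print(f"sampled {A.shape[0]} normals in R^{n}; bound expressions: {lo:.1f} and {hi:.1f}")
```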

Cite as

Gilles Bonnet, Daniel Dadush, Uri Grupel, Sophie Huiberts, and Galyna Livshyts. Asymptotic Bounds on the Combinatorial Diameter of Random Polytopes. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bonnet_et_al:LIPIcs.SoCG.2022.18,
  author =	{Bonnet, Gilles and Dadush, Daniel and Grupel, Uri and Huiberts, Sophie and Livshyts, Galyna},
  title =	{{Asymptotic Bounds on the Combinatorial Diameter of Random Polytopes}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.18},
  URN =		{urn:nbn:de:0030-drops-160269},
  doi =		{10.4230/LIPIcs.SoCG.2022.18},
  annote =	{Keywords: Random Polytopes, Combinatorial Diameter, Hirsch Conjecture}
}
Document
An Accelerated Newton-Dinkelbach Method and Its Application to Two Variables per Inequality Systems

Authors: Daniel Dadush, Zhuan Khye Koh, Bento Natura, and László A. Végh

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
We present an accelerated, or "look-ahead" version of the Newton-Dinkelbach method, a well-known technique for solving fractional and parametric optimization problems. This acceleration halves the Bregman divergence between the current iterate and the optimal solution within every two iterations. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain strongly polynomial algorithms in three application domains: (i) For linear fractional combinatorial optimization, we show a convergence bound of O(m log m) iterations; the previous best bound was O(m² log m) by Wang et al. (2006). (ii) We obtain a strongly polynomial label-correcting algorithm for solving linear feasibility systems with two variables per inequality (2VPI). For a 2VPI system with n variables and m constraints, our algorithm runs in O(mn) iterations. Every iteration takes O(mn) time for general 2VPI systems, and O(m + n log n) time for the special case of deterministic Markov Decision Processes (DMDPs). This extends and strengthens a previous result by Madani (2002) that showed a weakly polynomial bound for a variant of the Newton-Dinkelbach method for solving DMDPs. (iii) We give a simplified variant of the parametric submodular function minimization result by Goemans et al. (2017).
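
For background, the sketch below implements the classical Newton-Dinkelbach iteration for maximizing a ratio c(x)/d(x) over a finite feasible set; the oracle here is a brute-force maximization over an explicit list of solutions, which is a simplification, and the sketch does not include the paper's look-ahead acceleration.

```python
from fractions import Fraction

def newton_dinkelbach(solutions, c, d):
    """Classical Newton-Dinkelbach for max c(x)/d(x) over a finite set.

    `solutions` is an explicit list of feasible solutions (a simplification;
    in applications one calls a combinatorial optimization oracle), and
    c, d assign integer values to solutions with d(x) > 0 for all x.
    """
    x = solutions[0]
    delta = Fraction(c(x), d(x))          # ratio of any feasible solution
    while True:
        # Oracle step: maximize the parametric objective c(x) - delta * d(x).
        x = max(solutions, key=lambda s: Fraction(c(s)) - delta * d(s))
        if Fraction(c(x)) - delta * d(x) <= 0:
            return delta, x               # delta is the optimal ratio
        delta = Fraction(c(x), d(x))      # standard Newton update

if __name__ == "__main__":
    # Toy usage: maximize (c.x)/(d.x) over three candidate 0/1 vectors.
    sols = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
    cvec, dvec = (3, 1, 2), (2, 1, 1)
    c = lambda s: sum(a * b for a, b in zip(cvec, s))
    d = lambda s: sum(a * b for a, b in zip(dvec, s))
    print(newton_dinkelbach(sols, c, d))
```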

Cite as

Daniel Dadush, Zhuan Khye Koh, Bento Natura, and László A. Végh. An Accelerated Newton-Dinkelbach Method and Its Application to Two Variables per Inequality Systems. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dadush_et_al:LIPIcs.ESA.2021.36,
  author =	{Dadush, Daniel and Koh, Zhuan Khye and Natura, Bento and V\'{e}gh, L\'{a}szl\'{o} A.},
  title =	{{An Accelerated Newton-Dinkelbach Method and Its Application to Two Variables per Inequality Systems}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.36},
  URN =		{urn:nbn:de:0030-drops-146172},
  doi =		{10.4230/LIPIcs.ESA.2021.36},
  annote =	{Keywords: Newton-Dinkelbach method, fractional optimization, parametric optimization, strongly polynomial algorithms, two variables per inequality systems, Markov decision processes, submodular function minimization}
}
Document
Majorizing Measures for the Optimizer

Authors: Sander Borst, Daniel Dadush, Neil Olver, and Makrand Sinha

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
The theory of majorizing measures, extensively developed by Fernique, Talagrand and many others, provides one of the most general frameworks for controlling the behavior of stochastic processes. In particular, it can be applied to derive quantitative bounds on the expected suprema and the degree of continuity of sample paths for many processes. One of the crowning achievements of the theory is Talagrand’s tight alternative characterization of the suprema of Gaussian processes in terms of majorizing measures. The proof of this theorem was difficult, and thus considerable effort was put into the task of developing both shorter and easier-to-understand proofs. A major reason for this difficulty was considered to be the theory of majorizing measures itself, which had the reputation of being opaque and mysterious. As a consequence, most recent treatments of the theory (including by Talagrand himself) have eschewed the use of majorizing measures in favor of a purely combinatorial approach (the generic chaining) where objects based on sequences of partitions provide roughly matching upper and lower bounds on the desired expected supremum. In this paper, we return to majorizing measures as a primary object of study, and give a viewpoint that we think is natural and clarifying from an optimization perspective. As our main contribution, we give an algorithmic proof of the majorizing measures theorem based on two parts:
- We make the simple (but apparently new) observation that finding the best majorizing measure can be cast as a convex program. This also allows for efficiently computing the measure using off-the-shelf methods from convex optimization.
- We obtain tree-based upper and lower bound certificates by rounding, in a series of steps, the primal and dual solutions to this convex program.
While duality has conceptually been part of the theory since its beginnings, as far as we are aware no explicit link to convex optimization has been previously made.
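
The object being characterized is the expected supremum E sup_{t∈T} X_t of a centered Gaussian process. To make it concrete, the short sketch below estimates this quantity by Monte Carlo for a small finite index set and compares it with the elementary sqrt(2 log |T|) union-bound scale; it does not implement the paper's convex program or rounding, and all names are ours.

```python
import numpy as np

def expected_supremum(cov, samples=20000, seed=0):
    """Monte Carlo estimate of E[sup_t X_t] for a centered Gaussian process
    on a finite index set with covariance matrix `cov`."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=samples)
    return X.max(axis=1).mean()

if __name__ == "__main__":
    # The canonical process X_t = <g, t>, g standard Gaussian, over a finite T.
    T = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.2]])
    cov = T @ T.T                          # Cov(X_s, X_t) = <s, t>
    est = expected_supremum(cov)
    # Elementary comparison: the supremum scale is at most ~ sigma_max * sqrt(2 ln |T|).
    crude = np.sqrt(np.diag(cov)).max() * np.sqrt(2 * np.log(len(T)))
    print(f"Monte Carlo E[sup] ~ {est:.3f}, crude union-bound scale ~ {crude:.3f}")
```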

Cite as

Sander Borst, Daniel Dadush, Neil Olver, and Makrand Sinha. Majorizing Measures for the Optimizer. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 73:1-73:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{borst_et_al:LIPIcs.ITCS.2021.73,
  author =	{Borst, Sander and Dadush, Daniel and Olver, Neil and Sinha, Makrand},
  title =	{{Majorizing Measures for the Optimizer}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{73:1--73:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.73},
  URN =		{urn:nbn:de:0030-drops-136120},
  doi =		{10.4230/LIPIcs.ITCS.2021.73},
  annote =	{Keywords: Majorizing measures, Generic chaining, Gaussian processes, Convex optimization, Dimensionality Reduction}
}
Document
On the Complexity of Branching Proofs

Authors: Daniel Dadush and Samarth Tiwari

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
We consider the task of proving integer infeasibility of a bounded convex set K in ℝⁿ using a general branching proof system. In a general branching proof, one constructs a branching tree by adding an integer disjunction 𝐚𝐱 ≤ b or 𝐚𝐱 ≥ b+1, 𝐚 ∈ ℤⁿ, b ∈ ℤ, at each node, such that the leaves of the tree correspond to empty sets (i.e., K together with the inequalities picked up from the root to the leaf is empty). Recently, Beame et al. (ITCS 2018) asked whether the bit size of the coefficients in a branching proof, which they named stabbing planes (SP) refutations, for the case of polytopes derived from SAT formulas, can be assumed to be polynomial in n. We resolve this question in the affirmative, by showing that any branching proof can be recompiled so that the normals of the disjunctions have coefficients of size at most (n R)^O(n²), where R ∈ ℕ is the radius of an 𝓁₁ ball containing K, while increasing the number of nodes in the branching tree by at most a factor O(n). Our recompilation technique works by first replacing each disjunction using an iterated Diophantine approximation, introduced by Frank and Tardos (Combinatorica 1986), and proceeds by "fixing up" the leaves of the tree using judiciously added Chvátal-Gomory (CG) cuts. As our second contribution, we show that Tseitin formulas, an important class of infeasible SAT instances, have quasi-polynomial sized cutting plane (CP) refutations. This disproves a conjecture that Tseitin formulas are (exponentially) hard for CP. Our upper bound follows by recompiling the quasi-polynomial sized SP refutations for Tseitin formulas due to Beame et al., which have a special enumerative form, into a CP proof of the same length using a serialization technique of Cook et al. (Discrete Appl. Math. 1987). As our final contribution, we give a simple family of polytopes in [0,1]ⁿ requiring exponential sized branching proofs.
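
To make the proof system concrete, here is a minimal worked example of a branching proof, using only the definition given above (the specific set K is ours).

```latex
% A depth-one branching proof of integer infeasibility.
\[
  K = \Bigl\{ x \in \mathbb{R} : \tfrac13 \le x \le \tfrac23 \Bigr\},
  \qquad \text{branch at the root on } x \le 0 \ \text{ or } \ x \ge 1 .
\]
\[
  K \cap \{ x \le 0 \} = \emptyset, \qquad K \cap \{ x \ge 1 \} = \emptyset .
\]
% Both leaves are empty, so this depth-one tree certifies that K contains no
% integer point: every integer x satisfies x <= 0 or x >= 1, so no integer
% point is cut off by the disjunction.
```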

Cite as

Daniel Dadush and Samarth Tiwari. On the Complexity of Branching Proofs. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 34:1-34:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{dadush_et_al:LIPIcs.CCC.2020.34,
  author =	{Dadush, Daniel and Tiwari, Samarth},
  title =	{{On the Complexity of Branching Proofs}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{34:1--34:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.34},
  URN =		{urn:nbn:de:0030-drops-125863},
  doi =		{10.4230/LIPIcs.CCC.2020.34},
  annote =	{Keywords: Branching Proofs, Cutting Planes, Diophantine Approximation, Integer Programming, Stabbing Planes, Tseitin Formulas}
}
Document
Lattice-based Locality Sensitive Hashing is Optimal

Authors: Karthekeyan Chandrasekaran, Daniel Dadush, Venkata Gandikota, and Elena Grigorescu

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC '98) to give the first sublinear time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes "nearby" points to the same bucket and "far away" points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage. In a seminal work, Andoni and Indyk (FOCS '06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of 1/c^2 for the l_2 norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA '07) and O'Donnell, Wu and Zhou (TOCT '14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice-based LSH. In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of 1/c^2 using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g. to yield low space and query complexity, remains an important open problem.
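
As a toy illustration of the shape of a lattice-based LSH family, the sketch below hashes points to the nearest point of a randomly shifted, scaled copy of the integer lattice Zⁿ. This lattice is only a placeholder for exposition; the paper's result is about the existence of lattices achieving the optimal 1/c² exponent, which Zⁿ does not.

```python
import numpy as np

def lattice_lsh_hash(x, scale, shift):
    """Hash x to (the coordinates of) the nearest point of the shifted,
    scaled integer lattice scale * Z^n. Z^n is an illustrative placeholder;
    it does not achieve the optimal 1/c^2 exponent."""
    return np.round((x + shift) / scale).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, scale = 8, 1.0
    shift = rng.uniform(0, scale, size=n)        # random shift defines the hash family
    p = rng.standard_normal(n)
    q_near = p + 0.05 * rng.standard_normal(n)   # a nearby point
    q_far = p + 3.0 * rng.standard_normal(n)     # a far-away point
    hp, hn, hf = (lattice_lsh_hash(x, scale, shift) for x in (p, q_near, q_far))
    print("near collides:", np.array_equal(hp, hn), "| far collides:", np.array_equal(hp, hf))
```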

Cite as

Karthekeyan Chandrasekaran, Daniel Dadush, Venkata Gandikota, and Elena Grigorescu. Lattice-based Locality Sensitive Hashing is Optimal. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.ITCS.2018.42,
  author =	{Chandrasekaran, Karthekeyan and Dadush, Daniel and Gandikota, Venkata and Grigorescu, Elena},
  title =	{{Lattice-based Locality Sensitive Hashing is Optimal}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.42},
  URN =		{urn:nbn:de:0030-drops-83470},
  doi =		{10.4230/LIPIcs.ITCS.2018.42},
  annote =	{Keywords: Locality Sensitive Hashing, Approximate Nearest Neighbor Search, Random Lattices}
}
Document
Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem

Authors: Daniel Dadush, Shashwat Garg, Shachar Lovett, and Aleksandar Nikolov

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
An important theorem of Banaszczyk (Random Structures & Algorithms 1998) states that for any sequence of vectors of l_2 norm at most 1/5 and any convex body K of Gaussian measure 1/2 in R^n, there exists a signed combination of these vectors which lands inside K. A major open problem is to devise a constructive version of Banaszczyk's vector balancing theorem, i.e. to find an efficient algorithm which constructs the signed combination. We make progress towards this goal along several fronts. As our first contribution, we show an equivalence between Banaszczyk's theorem and the existence of O(1)-subgaussian distributions over signed combinations. For the case of symmetric convex bodies, our equivalence implies the existence of a universal signing algorithm (i.e. independent of the body), which simply samples from the subgaussian sign distribution and checks to see if the associated combination lands inside the body. For asymmetric convex bodies, we provide a novel recentering procedure, which allows us to reduce to the case where the body is symmetric. As our second main contribution, we show that the above framework can be efficiently implemented when the vectors have length O(1/sqrt{log n}), recovering Banaszczyk's results under this stronger assumption. More precisely, we use random walk techniques to produce the required O(1)-subgaussian signing distributions when the vectors have length O(1/sqrt{log n}), and use a stochastic gradient ascent method to implement the recentering procedure for asymmetric bodies.
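
The universal signing algorithm described above is simply "sample a signing, check membership". The sketch below illustrates that loop with uniformly random signs and an ℓ∞-ball test body; uniform signs are only a stand-in for the O(1)-subgaussian signing distributions the paper constructs, and the names are ours.

```python
import numpy as np

def sample_and_check_signing(vectors, in_body, max_tries=1000, seed=0):
    """Sample signings until the signed combination lands inside the body.

    `vectors` is an (m, n) array; `in_body(x)` tests membership in a convex
    body K. Uniform random signs stand in for the O(1)-subgaussian signing
    distribution constructed in the paper."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        signs = rng.choice([-1.0, 1.0], size=vectors.shape[0])
        combo = signs @ vectors
        if in_body(combo):
            return signs, combo
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n = 50, 10
    V = rng.standard_normal((m, n))
    V /= 5 * np.linalg.norm(V, axis=1, keepdims=True)   # vectors of l_2 norm 1/5
    in_box = lambda x: np.max(np.abs(x)) <= 1.0         # an l_infinity ball as the body K
    print("found a signing:", sample_and_check_signing(V, in_box) is not None)
```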

Cite as

Daniel Dadush, Shashwat Garg, Shachar Lovett, and Aleksandar Nikolov. Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 28:1-28:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{dadush_et_al:LIPIcs.APPROX-RANDOM.2016.28,
  author =	{Dadush, Daniel and Garg, Shashwat and Lovett, Shachar and Nikolov, Aleksandar},
  title =	{{Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{28:1--28:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.28},
  URN =		{urn:nbn:de:0030-drops-66512},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.28},
  annote =	{Keywords: Discrepancy, Vector Balancing, Convex Geometry}
}
Document
On the Lattice Distortion Problem

Authors: Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are. I.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to an n^{O(log(n))} factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a 2^{O(n*log(log(n))/log(n))} factor of optimal in polynomial time and within an n^{O(log(n))} factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.
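
As a small illustration, the sketch below computes the distortion of one particular candidate bijection between two explicit lattices, assuming distortion of a linear bijection T is measured by ‖T‖·‖T⁻¹‖ in the operator norm (our reading of the definition); minimizing this quantity over all bijections is the hard problem LDP asks about.

```python
import numpy as np

def candidate_distortion(B1, B2):
    """Distortion of the candidate bijection T = B2 B1^{-1}, which maps the
    i-th basis vector of B1 to the i-th basis vector of B2.

    Assumes distortion of T is ||T|| * ||T^{-1}|| in the operator norm;
    the Lattice Distortion Problem minimizes this over all bijections
    (equivalently, over all unimodular changes of basis)."""
    T = B2 @ np.linalg.inv(B1)
    return np.linalg.norm(T, 2) * np.linalg.norm(np.linalg.inv(T), 2)

if __name__ == "__main__":
    Z2 = np.eye(2)                               # the integer lattice Z^2
    stretched = np.diag([1.0, 3.0])              # the lattice Z x 3Z
    print(candidate_distortion(Z2, stretched))   # 3.0 for this particular candidate
```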

Cite as

Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz. On the Lattice Distortion Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.ESA.2016.9,
  author =	{Bennett, Huck and Dadush, Daniel and Stephens-Davidowitz, Noah},
  title =	{{On the Lattice Distortion Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.9},
  URN =		{urn:nbn:de:0030-drops-63519},
  doi =		{10.4230/LIPIcs.ESA.2016.9},
  annote =	{Keywords: lattices, lattice distortion, lattice isomorphism, geometry of numbers, basis reduction}
}
Document
On the Shadow Simplex Method for Curved Polyhedra

Authors: Daniel Dadush and Nicolai Hähnle

Published in: LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)


Abstract
We study the simplex method over polyhedra satisfying certain "discrete curvature" lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al. (SOCG 2012), Brunsch and Röglin (ICALP 2013), and Eisenbrand and Vempala (2014) have improved our understanding of such polyhedra. We develop a new type of dual analysis of the shadow simplex method which provides a clean and powerful tool for improving all previously mentioned results. Our methods are inspired by the recent work of Bonifas and the first named author, who analyzed a remarkably similar process as part of an algorithm for the Closest Vector Problem with Preprocessing. For our first result, we obtain a constructive diameter bound of O((n^2 / delta) ln (n / delta)) for n-dimensional polyhedra with curvature parameter delta in (0, 1]. For the class of polyhedra arising from totally unimodular constraint matrices, this implies a bound of O(n^3 ln n). For linear optimization, given an initial feasible vertex, we show that an optimal vertex can be found using an expected O((n^3 / delta) ln (n / delta)) simplex pivots, each requiring O(mn) time to compute. An initial feasible solution can be found using O((mn^3 / delta) ln (n / delta)) pivot steps.
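
As a sanity check on the stated implication for totally unimodular constraint matrices, substituting a curvature parameter of delta = 1/n (a value consistent with the abstract's claim; the exact constant is not stated there) into the general bound recovers the O(n³ ln n) diameter bound:

```latex
\[
  O\!\left(\frac{n^2}{\delta}\,\ln\frac{n}{\delta}\right)\bigg|_{\delta = 1/n}
  \;=\; O\!\left(n^3 \ln n^2\right)
  \;=\; O\!\left(n^3 \ln n\right).
\]
```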

Cite as

Daniel Dadush and Nicolai Hähnle. On the Shadow Simplex Method for Curved Polyhedra. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 345-359, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{dadush_et_al:LIPIcs.SOCG.2015.345,
  author =	{Dadush, Daniel and H\"{a}hnle, Nicolai},
  title =	{{On the Shadow Simplex Method for Curved Polyhedra}},
  booktitle =	{31st International Symposium on Computational Geometry (SoCG 2015)},
  pages =	{345--359},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-83-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{34},
  editor =	{Arge, Lars and Pach, J\'{a}nos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.345},
  URN =		{urn:nbn:de:0030-drops-51142},
  doi =		{10.4230/LIPIcs.SOCG.2015.345},
  annote =	{Keywords: Optimization, Linear Programming, Simplex Method, Diameter of Polyhedra}
}
Document
Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings

Authors: Daniel Dadush

Published in: LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)


Abstract
We give a 2^{O(n)}(1+1/eps)^n time and poly(n)-space deterministic algorithm for computing a (1+eps)^n approximation to the volume of a general convex body K, which comes close to matching the (1+c/eps)^{n/2} lower bound for volume estimation in the oracle model by Barany and Furedi (STOC 1986, Proc. Amer. Math. Soc. 1988). This improves on the previous results of Dadush and Vempala (Proc. Nat'l Acad. Sci. 2013), which gave the above result only for symmetric bodies and achieved a dependence of 2^{O(n)}(1+log^{5/2}(1/eps)/eps^3)^n. For our methods, we reduce the problem of volume estimation in K to counting lattice points in K subseteq R^n (via enumeration) for a specially constructed lattice L: a so-called thin covering of space with respect to K (more precisely, one for which L + K = R^n and vol_n(K)/det(L) = 2^{O(n)}). The trade-off between time and approximation ratio is achieved by scaling down the lattice. As our main technical contribution, we give the first deterministic 2^{O(n)}-time and poly(n)-space construction of thin covering lattices for general convex bodies. This improves on a recent construction of Alon et al. (STOC 2013) which requires exponential space and only works for symmetric bodies. For our construction, we combine the use of the M-ellipsoid from convex geometry (Milman, C.R. Math. Acad. Sci. Paris 1986) together with lattice sparsification and densification techniques (Dadush and Kun, SODA 2013; Rogers, J. London Math. Soc. 1950).
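
The reduction above replaces volume estimation by lattice point counting. The toy sketch below applies the counting-to-volume relation vol(K) ≈ #(L ∩ K)·det(L) to a Euclidean ball using the scaled integer lattice (1/k)·Zⁿ, which is not the specially constructed thin covering lattice of the paper; it only illustrates the relation being exploited.

```python
import itertools
import numpy as np

def volume_by_lattice_counting(radius, n, k):
    """Estimate the volume of the Euclidean ball of the given radius in R^n
    by counting points of the scaled integer lattice (1/k) Z^n inside it:
    vol(K) ~ #(L cap K) * det(L), with det(L) = (1/k)^n.

    The paper enumerates points of a specially constructed thin covering
    lattice instead of Z^n; this toy version only shows the relation."""
    bound = int(np.ceil(radius * k))
    count = sum(
        1
        for p in itertools.product(range(-bound, bound + 1), repeat=n)
        if sum(c * c for c in p) <= (radius * k) ** 2
    )
    return count * (1.0 / k) ** n

if __name__ == "__main__":
    est = volume_by_lattice_counting(radius=1.0, n=3, k=20)
    exact = 4.0 / 3.0 * np.pi            # volume of the unit ball in R^3
    print(f"lattice-counting estimate {est:.4f} vs exact {exact:.4f}")
```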

Cite as

Daniel Dadush. Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 704-718, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{dadush:LIPIcs.SOCG.2015.704,
  author =	{Dadush, Daniel},
  title =	{{Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings}},
  booktitle =	{31st International Symposium on Computational Geometry (SoCG 2015)},
  pages =	{704--718},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-83-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{34},
  editor =	{Arge, Lars and Pach, J\'{a}nos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.704},
  URN =		{urn:nbn:de:0030-drops-51168},
  doi =		{10.4230/LIPIcs.SOCG.2015.704},
  annote =	{Keywords: Deterministic Volume Estimation, Convex Geometry, Lattice Coverings of Space, Lattice Point Enumeration}
}