73 Search Results for "Vempala, Santosh"


Volume

LIPIcs, Volume 81

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)

APPROX/RANDOM 2017, August 16-18, 2017, Berkeley, CA, USA

Editors: Klaus Jansen, José D. P. Rolim, David P. Williamson, and Santosh S. Vempala

Document
Efficient Algorithms for Certifying Lower Bounds on the Discrepancy of Random Matrices

Authors: Prayaag Venkat

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In this paper, we initiate the study of the algorithmic problem of certifying lower bounds on the discrepancy of random matrices: given an input matrix A ∈ ℝ^{m × n}, output a value that is a lower bound on disc(A) = min_{x ∈ {± 1}ⁿ} ‖Ax‖_∞ for every A, but is close to the typical value of disc(A) with high probability over the choice of a random A. This problem is important because of its connections to conjecturally-hard average-case problems such as negatively-spiked PCA [Afonso S. Bandeira et al., 2020], the number-balancing problem [Gamarnik and Kızıldağ, 2021] and refuting random constraint satisfaction problems [Prasad Raghavendra et al., 2017]. We give the first polynomial-time algorithms with non-trivial guarantees for two main settings. First, when the entries of A are i.i.d. standard Gaussians, it is known that disc(A) = Θ (√n2^{-n/m}) with high probability [Karthekeyan Chandrasekaran and Santosh S. Vempala, 2014; Aubin et al., 2019; Paxton Turner et al., 2020] and that super-constant levels of the Sum-of-Squares SDP hierarchy fail to certify anything better than disc(A) ≥ 0 when m < n - o(n) [Mrinalkanti Ghosh et al., 2020]. In contrast, our algorithm certifies that disc(A) ≥ exp(-O(n²/m)) with high probability. As an application, this formally refutes a conjecture of Bandeira, Kunisky, and Wein [Afonso S. Bandeira et al., 2020] on the computational hardness of the detection problem in the negatively-spiked Wishart model. Second, we consider the integer partitioning problem: given n uniformly random b-bit integers a₁, …, a_n, certify the non-existence of a perfect partition, i.e. certify that disc(A) ≥ 1 for A = (a₁, …, a_n). Under the scaling b = α n, it is known that the probability of the existence of a perfect partition undergoes a phase transition from 1 to 0 at α = 1 [Christian Borgs et al., 2001]; our algorithm certifies the non-existence of perfect partitions for some α = O(n). We also give efficient non-deterministic algorithms with significantly improved guarantees, raising the possibility that the landscape of these certification problems closely resembles that of e.g. the problem of refuting random 3SAT formulas in the unsatisfiable regime. Our algorithms involve a reduction to the Shortest Vector Problem and employ the Lenstra-Lenstra-Lovász algorithm.
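
To make the certified quantity concrete, the following brute-force sketch evaluates disc(A) = min_{x ∈ {±1}ⁿ} ‖Ax‖_∞ exactly for a tiny Gaussian instance. It is exponential in n and only illustrates the definition, not the paper's polynomial-time certification algorithm (which, per the abstract, reduces to the Shortest Vector Problem and uses LLL); the matrix size and random seed are arbitrary choices.

import itertools
import numpy as np

def discrepancy(A: np.ndarray) -> float:
    # Exact disc(A) by enumerating all sign vectors x in {-1, +1}^n.
    n = A.shape[1]
    best = float("inf")
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        best = min(best, float(np.abs(A @ np.array(signs)).max()))
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))   # small i.i.d. Gaussian instance (m = 3, n = 8)
print(discrepancy(A))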

Cite as

Prayaag Venkat. Efficient Algorithms for Certifying Lower Bounds on the Discrepancy of Random Matrices. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 98:1-98:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{venkat:LIPIcs.ITCS.2023.98,
  author =	{Venkat, Prayaag},
  title =	{{Efficient Algorithms for Certifying Lower Bounds on the Discrepancy of Random Matrices}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{98:1--98:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.98},
  URN =		{urn:nbn:de:0030-drops-176015},
  doi =		{10.4230/LIPIcs.ITCS.2023.98},
  annote =	{Keywords: Average-case discrepancy theory, lattices, shortest vector problem}
}
Document
RANDOM
A Unified Approach to Discrepancy Minimization

Authors: Nikhil Bansal, Aditi Laddha, and Santosh Vempala

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
We study a unified approach and algorithm for constructive discrepancy minimization based on a stochastic process. By varying the parameters of the process, one can recover various state-of-the-art results. We demonstrate the flexibility of the method by deriving a discrepancy bound for smoothed instances, which interpolates between known bounds for worst-case and random instances.

Cite as

Nikhil Bansal, Aditi Laddha, and Santosh Vempala. A Unified Approach to Discrepancy Minimization. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 1:1-1:22, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)



@InProceedings{bansal_et_al:LIPIcs.APPROX/RANDOM.2022.1,
  author =	{Bansal, Nikhil and Laddha, Aditi and Vempala, Santosh},
  title =	{{A Unified Approach to Discrepancy Minimization}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{1:1--1:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.1},
  URN =		{urn:nbn:de:0030-drops-171238},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.1},
  annote =	{Keywords: Discrepancy theory, smoothed analysis}
}
Document
Invited Talk
The Manifold Joys of Sampling (Invited Talk)

Authors: Yin Tat Lee and Santosh S. Vempala

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We survey recent progress and many open questions in the field of sampling high-dimensional distributions, with specific focus on sampling with non-Euclidean metrics.

Cite as

Yin Tat Lee and Santosh S. Vempala. The Manifold Joys of Sampling (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 4:1-4:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)



@InProceedings{lee_et_al:LIPIcs.ICALP.2022.4,
  author =	{Lee, Yin Tat and Vempala, Santosh S.},
  title =	{{The Manifold Joys of Sampling}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.4},
  URN =		{urn:nbn:de:0030-drops-163459},
  doi =		{10.4230/LIPIcs.ICALP.2022.4},
  annote =	{Keywords: Sampling, Diffusion, Optimization, High Dimension}
}
Document
Domain Sparsification of Discrete Distributions Using Entropic Independence

Authors: Nima Anari, Michał Dereziński, Thuy-Duong Vuong, and Elizabeth Yang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We present a framework for speeding up the time it takes to sample from discrete distributions μ defined over subsets of size k of a ground set of n elements, in the regime where k is much smaller than n. We show that if one has access to estimates of marginals P_{S∼ μ} {i ∈ S}, then the task of sampling from μ can be reduced to sampling from related distributions ν supported on size k subsets of a ground set of only n^{1-α}⋅ poly(k) elements. Here, 1/α ∈ [1, k] is the parameter of entropic independence for μ. Further, our algorithm only requires sparsified distributions ν that are obtained by applying a sparse (mostly 0) external field to μ, an operation that for many distributions μ of interest, retains algorithmic tractability of sampling from ν. This phenomenon, which we dub domain sparsification, allows us to pay a one-time cost of estimating the marginals of μ, and in return reduce the amortized cost needed to produce many samples from the distribution μ, as is often needed in upstream tasks such as counting and inference. For a wide range of distributions where α = Ω(1), our result reduces the domain size, and as a corollary, the cost-per-sample, by a poly(n) factor. Examples include monomers in a monomer-dimer system, non-symmetric determinantal point processes, and partition-constrained Strongly Rayleigh measures. Our work significantly extends the reach of prior work of Anari and Dereziński who obtained domain sparsification for distributions with a log-concave generating polynomial (corresponding to α = 1). As a corollary of our new analysis techniques, we also obtain a less stringent requirement on the accuracy of marginal estimates even for the case of log-concave polynomials; roughly speaking, we show that constant-factor approximation is enough for domain sparsification, improving over O(1/k) relative error established in prior work.

Cite as

Nima Anari, Michał Dereziński, Thuy-Duong Vuong, and Elizabeth Yang. Domain Sparsification of Discrete Distributions Using Entropic Independence. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 5:1-5:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{anari_et_al:LIPIcs.ITCS.2022.5,
  author =	{Anari, Nima and Derezi\'{n}ski, Micha{\l} and Vuong, Thuy-Duong and Yang, Elizabeth},
  title =	{{Domain Sparsification of Discrete Distributions Using Entropic Independence}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{5:1--5:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.5},
  URN =		{urn:nbn:de:0030-drops-156013},
  doi =		{10.4230/LIPIcs.ITCS.2022.5},
  annote =	{Keywords: Domain Sparsification, Markov Chains, Sampling, Entropic Independence}
}
Document
Convergence of Gibbs Sampling: Coordinate Hit-And-Run Mixes Fast

Authors: Aditi Laddha and Santosh S. Vempala

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
The Gibbs Sampler is a general method for sampling high-dimensional distributions, dating back to 1971. In each step of the Gibbs Sampler, we pick a random coordinate and re-sample that coordinate from the distribution induced by fixing all the other coordinates. While it has become widely used over the past half-century, guarantees of efficient convergence have been elusive. We show that for a convex body K in ℝⁿ with diameter D, the mixing time of the Coordinate Hit-and-Run (CHAR) algorithm on K is polynomial in n and D. We also give a lower bound on the mixing rate of CHAR, showing that it is strictly worse than hit-and-run and the ball walk in the worst case.
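
As a minimal illustration of the algorithm described above, the sketch below performs Coordinate Hit-and-Run steps for the special case of a polytope {x : Ax ≤ b}: pick a random coordinate, compute the chord of the body along that coordinate, and resample uniformly on it. The polytope representation, starting point, and step count are illustrative assumptions; the paper's analysis applies to general convex bodies.

import numpy as np

def char_step(A, b, x, rng):
    # One Coordinate Hit-and-Run step inside the polytope {x : Ax <= b}.
    i = rng.integers(A.shape[1])                 # pick a random coordinate
    slack = b - A @ x + A[:, i] * x[i]           # b_k - sum_{j != i} A[k, j] x_j
    lo, hi = -np.inf, np.inf
    for a_ki, s_k in zip(A[:, i], slack):        # feasible interval for x_i
        if a_ki > 0:
            hi = min(hi, s_k / a_ki)
        elif a_ki < 0:
            lo = max(lo, s_k / a_ki)
    x = x.copy()
    x[i] = rng.uniform(lo, hi)                   # resample uniformly on the chord
    return x

A = np.vstack([np.eye(3), -np.eye(3)])           # the cube [-1, 1]^3
b = np.ones(6)
rng = np.random.default_rng(1)
x = np.zeros(3)
for _ in range(1000):
    x = char_step(A, b, x, rng)
print(x)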

Cite as

Aditi Laddha and Santosh S. Vempala. Convergence of Gibbs Sampling: Coordinate Hit-And-Run Mixes Fast. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 51:1-51:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)



@InProceedings{laddha_et_al:LIPIcs.SoCG.2021.51,
  author =	{Laddha, Aditi and Vempala, Santosh S.},
  title =	{{Convergence of Gibbs Sampling: Coordinate Hit-And-Run Mixes Fast}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{51:1--51:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.51},
  URN =		{urn:nbn:de:0030-drops-138503},
  doi =		{10.4230/LIPIcs.SoCG.2021.51},
  annote =	{Keywords: Gibbs Sampler, Coordinate Hit and run, Mixing time of Markov Chain}
}
Document
Sampling Arborescences in Parallel

Authors: Nima Anari, Nathan Hu, Amin Saberi, and Aaron Schild

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We study the problem of sampling a uniformly random directed rooted spanning tree, also known as an arborescence, from a possibly weighted directed graph. Classically, this problem has long been known to be polynomial-time solvable; the exact number of arborescences can be computed by a determinant [Tutte, 1948], and sampling can be reduced to counting [Jerrum et al., 1986; Jerrum and Sinclair, 1996]. However, the classic reduction from sampling to counting seems to be inherently sequential. This raises the question of designing efficient parallel algorithms for sampling. We show that sampling arborescences can be done in RNC. For several well-studied combinatorial structures, counting can be reduced to the computation of a determinant, which is known to be in NC [Csanky, 1975]. These include arborescences, planar graph perfect matchings, Eulerian tours in digraphs, and determinantal point processes. However, not much is known about efficient parallel sampling of these structures. Our work is a step towards resolving this mystery.
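
For context on the classical determinant-based counting the abstract cites [Tutte, 1948], here is a small sketch: the number of spanning arborescences rooted at r, with every edge directed away from r, equals a cofactor of the directed Laplacian D_in - A. This only illustrates the counting step (a determinant, hence computable in NC), not the paper's parallel sampling algorithm; the example graph is an arbitrary choice.

import numpy as np

def count_arborescences(adj: np.ndarray, root: int) -> float:
    # adj[i, j] = (weighted) number of edges i -> j.
    # Directed matrix-tree theorem: delete the root's row and column from
    # L = D_in - A and take the determinant.
    L = np.diag(adj.sum(axis=0)) - adj
    minor = np.delete(np.delete(L, root, axis=0), root, axis=1)
    return float(np.linalg.det(minor)) if minor.size else 1.0

# Edges 0 -> 1, 0 -> 2, 1 -> 2 give exactly two arborescences rooted at 0.
adj = np.zeros((3, 3))
adj[0, 1] = adj[0, 2] = adj[1, 2] = 1.0
print(count_arborescences(adj, root=0))          # prints 2.0 (up to rounding)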

Cite as

Nima Anari, Nathan Hu, Amin Saberi, and Aaron Schild. Sampling Arborescences in Parallel. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{anari_et_al:LIPIcs.ITCS.2021.83,
  author =	{Anari, Nima and Hu, Nathan and Saberi, Amin and Schild, Aaron},
  title =	{{Sampling Arborescences in Parallel}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.83},
  URN =		{urn:nbn:de:0030-drops-136225},
  doi =		{10.4230/LIPIcs.ITCS.2021.83},
  annote =	{Keywords: parallel algorithms, arborescences, spanning trees, random sampling}
}
Document
How to Find a Point in the Convex Hull Privately

Authors: Haim Kaplan, Micha Sharir, and Uri Stemmer

Published in: LIPIcs, Volume 164, 36th International Symposium on Computational Geometry (SoCG 2020)


Abstract
We study the question of how to compute a point in the convex hull of an input set S of n points in ℝ^d in a differentially private manner. This question, which is trivial without privacy requirements, turns out to be quite deep when imposing differential privacy. In particular, it is known that the input points must reside on a fixed finite subset G ⊆ ℝ^d, and furthermore, the size of S must grow with the size of G. Previous works [Amos Beimel et al., 2010; Amos Beimel et al., 2019; Amos Beimel et al., 2013; Mark Bun et al., 2018; Mark Bun et al., 2015; Haim Kaplan et al., 2019] focused on understanding how n needs to grow with |G|, and showed that n = O(d^{2.5} ⋅ 8^{log^* |G|}) suffices (so n does not have to grow significantly with |G|). However, the available constructions exhibit running time at least |G|^{d²}, where typically |G| = X^d for some (large) discretization parameter X, so the running time is in fact Ω(X^{d³}). In this paper we give a differentially private algorithm that runs in O(n^d) time, assuming that n = Ω(d⁴ log X). To get this result we study and exploit some structural properties of the Tukey levels (the regions D_{≥ k} consisting of points whose Tukey depth is at least k, for k = 0,1,…). In particular, we derive lower bounds on their volumes for point sets S in general position, and develop a rather subtle mechanism for handling point sets S in degenerate position (where the deep Tukey regions have zero volume). A naive approach to the construction of the Tukey regions requires n^{O(d²)} time. To reduce the cost to O(n^d), we use an approximation scheme for estimating the volumes of the Tukey regions (within their affine spans in case of degeneracy), and for sampling a point from such a region, a scheme that is based on the volume estimation framework of Lovász and Vempala [László Lovász and Santosh S. Vempala, 2006] and of Cousins and Vempala [Ben Cousins and Santosh S. Vempala, 2018]. Making this framework differentially private raises a set of technical challenges that we address.

Cite as

Haim Kaplan, Micha Sharir, and Uri Stemmer. How to Find a Point in the Convex Hull Privately. In 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 164, pp. 52:1-52:15, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)



@InProceedings{kaplan_et_al:LIPIcs.SoCG.2020.52,
  author =	{Kaplan, Haim and Sharir, Micha and Stemmer, Uri},
  title =	{{How to Find a Point in the Convex Hull Privately}},
  booktitle =	{36th International Symposium on Computational Geometry (SoCG 2020)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-143-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{164},
  editor =	{Cabello, Sergio and Chen, Danny Z.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2020.52},
  URN =		{urn:nbn:de:0030-drops-122107},
  doi =		{10.4230/LIPIcs.SoCG.2020.52},
  annote =	{Keywords: Differential privacy, Tukey depth, Convex hull}
}
Document
RANDOM
Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions

Authors: Zongchen Chen and Santosh S. Vempala

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We study Hamiltonian Monte Carlo (HMC) for sampling from a strongly logconcave density proportional to e^{-f} where f:R^d -> R is mu-strongly convex and L-smooth (the condition number is kappa = L/mu). We show that the relaxation time (inverse of the spectral gap) of ideal HMC is O(kappa), improving on the previous best bound of O(kappa^{1.5}); we complement this with an example where the relaxation time is Omega(kappa). When implemented using a nearly optimal ODE solver, HMC returns an epsilon-approximate point in 2-Wasserstein distance using O~((kappa d)^{0.5} epsilon^{-1}) gradient evaluations per step and O~((kappa d)^{1.5}epsilon^{-1}) total time.
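
For readers less familiar with HMC, the sketch below is a minimal leapfrog-discretized version targeting a density proportional to e^{-f}. The paper analyzes ideal HMC and an implementation based on a nearly optimal ODE solver, so the leapfrog integrator, step size, and standard Gaussian target here are illustrative assumptions only.

import numpy as np

def hmc_step(x, f, grad_f, rng, step=0.1, n_leapfrog=20):
    p = rng.standard_normal(x.shape)              # fresh Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_f(x_new)           # leapfrog: initial half momentum step
    for _ in range(n_leapfrog):
        x_new += step * p_new                     # full position step
        p_new -= step * grad_f(x_new)             # full momentum step
    p_new += 0.5 * step * grad_f(x_new)           # roll back to a trailing half step
    h_old = f(x) + 0.5 * p @ p                    # Hamiltonian before the trajectory
    h_new = f(x_new) + 0.5 * p_new @ p_new        # Hamiltonian after the trajectory
    return x_new if np.log(rng.uniform()) < h_old - h_new else x   # Metropolis filter

f = lambda x: 0.5 * x @ x                         # f for a standard Gaussian (kappa = 1)
grad_f = lambda x: x
rng = np.random.default_rng(4)
x = np.zeros(5)
for _ in range(200):
    x = hmc_step(x, f, grad_f, rng)
print(x)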

Cite as

Zongchen Chen and Santosh S. Vempala. Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 64:1-64:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)



@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2019.64,
  author =	{Chen, Zongchen and Vempala, Santosh S.},
  title =	{{Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{64:1--64:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.64},
  URN =		{urn:nbn:de:0030-drops-112790},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.64},
  annote =	{Keywords: logconcave distribution, sampling, Hamiltonian Monte Carlo, spectral gap, strong convexity}
}
Document
On the Stretch Factor of Polygonal Chains

Authors: Ke Chen, Adrian Dumitrescu, Wolfgang Mulzer, and Csaba D. Tóth

Published in: LIPIcs, Volume 138, 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)


Abstract
Let P=(p_1, p_2, ..., p_n) be a polygonal chain. The stretch factor of P is the ratio between the total length of P and the distance of its endpoints, sum_{i = 1}^{n-1} |p_i p_{i+1}|/|p_1 p_n|. For a parameter c >= 1, we call P a c-chain if |p_ip_j|+|p_jp_k| <= c|p_ip_k|, for every triple (i,j,k), 1 <= i<j<k <= n. The stretch factor is a global property: it measures how close P is to a straight line, and it involves all the vertices of P; being a c-chain, on the other hand, is a fingerprint-property: it only depends on subsets of O(1) vertices of the chain. We investigate how the c-chain property influences the stretch factor in the plane: (i) we show that for every epsilon > 0, there is a noncrossing c-chain that has stretch factor Omega(n^{1/2-epsilon}), for sufficiently large constant c=c(epsilon); (ii) on the other hand, the stretch factor of a c-chain P is O(n^{1/2}), for every constant c >= 1, regardless of whether P is crossing or noncrossing; and (iii) we give a randomized algorithm that can determine, for a polygonal chain P in R^2 with n vertices, the minimum c >= 1 for which P is a c-chain in O(n^{2.5} polylog n) expected time and O(n log n) space.
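
Both quantities defined above are easy to compute directly for small chains. The brute-force sketch below evaluates the stretch factor and the smallest c for which P is a c-chain by checking all triples in O(n³) time; it is meant only for intuition, not as the paper's O(n^{2.5} polylog n) randomized algorithm, and the example chain is an arbitrary choice.

from itertools import combinations
from math import dist

def stretch_factor(P):
    # Total length of the chain divided by the distance between its endpoints.
    return sum(dist(P[i], P[i + 1]) for i in range(len(P) - 1)) / dist(P[0], P[-1])

def min_chain_constant(P):
    # Smallest c such that |p_i p_j| + |p_j p_k| <= c |p_i p_k| for all i < j < k.
    return max((dist(P[i], P[j]) + dist(P[j], P[k])) / dist(P[i], P[k])
               for i, j, k in combinations(range(len(P)), 3))

P = [(0, 0), (1, 1), (2, 0), (3, 1)]
print(stretch_factor(P), min_chain_constant(P))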

Cite as

Ke Chen, Adrian Dumitrescu, Wolfgang Mulzer, and Csaba D. Tóth. On the Stretch Factor of Polygonal Chains. In 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 138, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chen_et_al:LIPIcs.MFCS.2019.56,
  author =	{Chen, Ke and Dumitrescu, Adrian and Mulzer, Wolfgang and T\'{o}th, Csaba D.},
  title =	{{On the Stretch Factor of Polygonal Chains}},
  booktitle =	{44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-117-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{138},
  editor =	{Rossmanith, Peter and Heggernes, Pinar and Katoen, Joost-Pieter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2019.56},
  URN =		{urn:nbn:de:0030-drops-110005},
  doi =		{10.4230/LIPIcs.MFCS.2019.56},
  annote =	{Keywords: polygonal chain, vertex dilation, Koch curve, recursive construction}
}
Document
Sherali - Adams Strikes Back

Authors: Ryan O'Donnell and Tselil Schramm

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
Let G be any n-vertex graph whose random walk matrix has its nontrivial eigenvalues bounded in magnitude by 1/sqrt{Delta} (for example, a random graph G of average degree Theta(Delta) typically has this property). We show that the exp(c (log n)/(log Delta))-round Sherali - Adams linear programming hierarchy certifies that the maximum cut in such a G is at most 50.1 % (in fact, at most 1/2 + 2^{-Omega(c)}). For example, in random graphs with n^{1.01} edges, O(1) rounds suffice; in random graphs with n * polylog(n) edges, n^{O(1/log log n)} = n^{o(1)} rounds suffice. Our results stand in contrast to the conventional beliefs that linear programming hierarchies perform poorly for max-cut and other CSPs, and that eigenvalue/SDP methods are needed for effective refutation. Indeed, our results imply that constant-round Sherali - Adams can strongly refute random Boolean k-CSP instances with n^{ceil[k/2] + delta} constraints; previously this had only been done with spectral algorithms or the SOS SDP hierarchy.

Cite as

Ryan O'Donnell and Tselil Schramm. Sherali - Adams Strikes Back. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 8:1-8:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{odonnell_et_al:LIPIcs.CCC.2019.8,
  author =	{O'Donnell, Ryan and Schramm, Tselil},
  title =	{{Sherali - Adams Strikes Back}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{8:1--8:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.8},
  URN =		{urn:nbn:de:0030-drops-108309},
  doi =		{10.4230/LIPIcs.CCC.2019.8},
  annote =	{Keywords: Linear programming, Sherali, Adams, max-cut, graph eigenvalues, Sum-of-Squares}
}
Document
Random Projection in the Brain and Computation with Assemblies of Neurons

Authors: Christos H. Papadimitriou and Santosh S. Vempala

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
It has been recently shown via simulations [Dasgupta et al., 2017] that random projection followed by a cap operation (setting to one the k largest elements of a vector and everything else to zero), a map believed to be an important part of the insect olfactory system, has strong locality sensitivity properties. We calculate the asymptotic law whereby the overlap in the input vectors is conserved, verifying mathematically this empirical finding. We then focus on the far more complex homologous operation in the mammalian brain, the creation through successive projections and caps of an assembly (roughly, a set of excitatory neurons representing a memory or concept) in the presence of recurrent synapses and plasticity. After providing a careful definition of assemblies, we prove that the operation of assembly projection converges with high probability, over the randomness of synaptic connectivity, even if plasticity is relatively small (previous proofs relied on high plasticity). We also show that assembly projection has itself some locality preservation properties. Finally, we propose a large repertoire of assembly operations, including associate, merge, reciprocal project, and append, each of them both biologically plausible and consistent with what we know from experiments, and show that this computational system is capable of simulating, again with high probability, arbitrary computation in a quite natural way. We hope that this novel way of looking at brain computation, open-ended and based on reasonably mainstream ideas in neuroscience, may prove an attractive entry point for computer scientists to work on understanding the brain.
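
A small sketch of the random-projection-and-cap map described above: project the input by a sparse random matrix, set the k largest coordinates of the result to one and zero out the rest, and observe that two overlapping inputs keep a noticeable fraction of their overlap. The dimensions, connection probability, k, and input sparsity are illustrative assumptions, not parameters from the paper.

import numpy as np

def project_and_cap(W, x, k):
    y = W @ x
    out = np.zeros_like(y)
    out[np.argsort(y)[-k:]] = 1.0                 # cap: top-k coordinates set to 1
    return out

rng = np.random.default_rng(2)
n_in, n_out, k = 1000, 2000, 50
W = (rng.random((n_out, n_in)) < 0.05).astype(float)    # sparse random projection
x1 = np.zeros(n_in); x1[:100] = 1.0
x2 = np.zeros(n_in); x2[50:150] = 1.0                   # shares 50 active coordinates with x1
y1 = project_and_cap(W, x1, k)
y2 = project_and_cap(W, x2, k)
print(int((y1 * y2).sum()), "of", k, "capped coordinates shared")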

Cite as

Christos H. Papadimitriou and Santosh S. Vempala. Random Projection in the Brain and Computation with Assemblies of Neurons. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 57:1-57:19, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)



@InProceedings{papadimitriou_et_al:LIPIcs.ITCS.2019.57,
  author =	{Papadimitriou, Christos H. and Vempala, Santosh S.},
  title =	{{Random Projection in the Brain and Computation with Assemblies of Neurons}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{57:1--57:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.57},
  URN =		{urn:nbn:de:0030-drops-101506},
  doi =		{10.4230/LIPIcs.ITCS.2019.57},
  annote =	{Keywords: Brain computation, random projection, assemblies, plasticity, memory, association}
}
Document
Invited Paper
Continuous Algorithms (Invited Paper)

Authors: Santosh Vempala

Published in: LIPIcs, Volume 122, 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)


Abstract
While the design of algorithms is traditionally a discrete endeavour, in recent years many advances have come from continuous perspectives. Typically, a continuous process, deterministic or randomized, is designed and shown to have desirable properties, such as approaching an optimal solution or a target distribution, and an algorithm is derived from this by appropriate discretization. We will discuss examples of this for optimization (gradient descent, interior-point method) and sampling (Brownian motion, Hamiltonian Monte Carlo), with applications to learning. In some interesting and rather general settings, the current fastest methods have been obtained via this approach.
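
As one concrete instance of the "design a continuous process, then discretize" recipe sketched above: gradient flow x'(t) = -∇f(x(t)), discretized by forward Euler with step size h, is exactly gradient descent. The quadratic objective and step size below are illustrative choices.

import numpy as np

def gradient_descent(grad_f, x0, step=0.1, iters=200):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x -= step * grad_f(x)         # forward-Euler discretization of gradient flow
    return x

grad_f = lambda x: 2.0 * (x - np.array([1.0, -2.0]))    # f(x) = ||x - (1, -2)||^2
print(gradient_descent(grad_f, x0=[0.0, 0.0]))          # converges to (1, -2)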

Cite as

Santosh Vempala. Continuous Algorithms (Invited Paper). In 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 122, p. 4:1, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2018)



@InProceedings{vempala:LIPIcs.FSTTCS.2018.4,
  author =	{Vempala, Santosh},
  title =	{{Continuous Algorithms}},
  booktitle =	{38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-093-4},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{122},
  editor =	{Ganguly, Sumit and Pandya, Paritosh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2018.4},
  URN =		{urn:nbn:de:0030-drops-99037},
  doi =		{10.4230/LIPIcs.FSTTCS.2018.4},
  annote =	{Keywords: Algorithms}
}
Document
Long Term Memory and the Densest K-Subgraph Problem

Authors: Robert Legenstein, Wolfgang Maass, Christos H. Papadimitriou, and Santosh S. Vempala

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
In a recent experiment, a cell in the human medial temporal lobe (MTL) encoding one sensory stimulus starts to also respond to a second stimulus following a combined experience associating the two. We develop a theoretical model predicting that an assembly of cells with exceptionally high synaptic intraconnectivity can emerge, in response to a particular sensory experience, to encode and abstract that experience. We also show that two such assemblies are modified to increase their intersection after a sensory event that associates the two corresponding stimuli. The main technical tools employed are random graph theory, and Bernoulli approximations. Assembly creation must overcome a computational challenge akin to the Densest K-Subgraph problem, namely selecting, from a large population of randomly and sparsely interconnected cells, a subset with exceptionally high density of interconnections. We identify three mechanisms that help achieve this feat in our model: (1) a simple two-stage randomized algorithm, and (2) the "triangle completion bias" in synaptic connectivity and a "birthday paradox", while (3) the strength of these connections is enhanced through Hebbian plasticity.
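
To make the Densest K-Subgraph analogy concrete, the sketch below runs the standard greedy peeling heuristic (repeatedly delete the lowest-degree vertex until k vertices remain) on a random graph with a planted dense subset. This is a generic illustration of the combinatorial task, not the two-stage randomized algorithm or the plasticity-based mechanisms analyzed in the paper; the graph size, density, and planted set are illustrative assumptions.

import numpy as np

def peel_to_k(adj: np.ndarray, k: int) -> list:
    # Repeatedly remove the vertex of smallest degree within the surviving set.
    alive = set(range(adj.shape[0]))
    while len(alive) > k:
        degrees = {v: adj[v, list(alive)].sum() for v in alive}
        alive.remove(min(degrees, key=degrees.get))
    return sorted(alive)

rng = np.random.default_rng(5)
n, p, k = 60, 0.1, 10
adj = (rng.random((n, n)) < p).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # sparse random symmetric graph
planted = rng.choice(n, size=k, replace=False)     # plant a densely connected subset
for i in planted:
    for j in planted:
        if i != j:
            adj[i, j] = 1
print(sorted(planted.tolist()), peel_to_k(adj, k)) # peeling often recovers the planted set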

Cite as

Robert Legenstein, Wolfgang Maass, Christos H. Papadimitriou, and Santosh S. Vempala. Long Term Memory and the Densest K-Subgraph Problem. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 57:1-57:15, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2018)



@InProceedings{legenstein_et_al:LIPIcs.ITCS.2018.57,
  author =	{Legenstein, Robert and Maass, Wolfgang and Papadimitriou, Christos H. and Vempala, Santosh S.},
  title =	{{Long Term Memory and the Densest K-Subgraph Problem}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.57},
  URN =		{urn:nbn:de:0030-drops-83593},
  doi =		{10.4230/LIPIcs.ITCS.2018.57},
  annote =	{Keywords: Brain computation, long term memory, assemblies, association}
}
Document
Towards Human Computable Passwords

Authors: Jeremiah Blocki, Manuel Blum, Anupam Datta, and Santosh Vempala

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
An interesting challenge for the cryptography community is to design authentication protocols that are so simple that a human can execute them without relying on a fully trusted computer. We propose several candidate authentication protocols for a setting in which the human user can only receive assistance from a semi-trusted computer - a computer that stores information and performs computations correctly but does not provide confidentiality. Our schemes use a semi-trusted computer to store and display public challenges C_i ∈ [n]^k. The human user memorizes a random secret mapping σ: [n] → ℤ_d and authenticates by computing responses f(σ(C_i)) to a sequence of public challenges where f: ℤ_d^k → ℤ_d is a function that is easy for the human to evaluate. We prove that any statistical adversary needs to sample m = Ω~(n^{s(f)}) challenge-response pairs to recover σ, for a security parameter s(f) that depends on two key properties of f. Our lower bound generalizes recent results of Feldman et al. [Feldman'15] who proved analogous results for the special case d=2. To obtain our results, we apply the general hypercontractivity theorem [O'Donnell'14] to lower bound the statistical dimension of the distribution over challenge-response pairs induced by f and σ. Our statistical dimension lower bounds apply to arbitrary functions f: ℤ_d^k → ℤ_d (not just to functions that are easy for a human to evaluate). As an application, we propose a family of human computable password functions f_{k₁,k₂} in which the user needs to perform 2k₁+2k₂+1 primitive operations (e.g., adding two digits or remembering a secret value σ(i)), and we show that s(f) = min{k₁+1, (k₂+1)/2}. For these schemes, we prove that forging passwords is equivalent to recovering the secret mapping. Thus, our human computable password schemes can maintain strong security guarantees even after an adversary has observed the user login to many different accounts.
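
To fix notation, the toy sketch below instantiates the challenge-response setup described above: the user memorizes σ: [n] → ℤ_d and answers a public challenge C ∈ [n]^k with f(σ(C)). The response function used here (sum mod d) is an illustrative stand-in, not the paper's f_{k₁,k₂} family, and the parameters n, d, k are arbitrary.

import numpy as np

n, d, k = 26, 10, 3
rng = np.random.default_rng(6)
sigma = rng.integers(d, size=n)                    # memorized secret sigma: [n] -> Z_d

def f(values):                                     # toy response function: sum mod d
    return int(values.sum()) % d

def respond(challenge):
    return f(sigma[np.asarray(challenge)])

challenge = rng.integers(n, size=k)                # public challenge C in [n]^k
print(challenge.tolist(), "->", respond(challenge))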

Cite as

Jeremiah Blocki, Manuel Blum, Anupam Datta, and Santosh Vempala. Towards Human Computable Passwords. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 10:1-10:47, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2017)



@InProceedings{blocki_et_al:LIPIcs.ITCS.2017.10,
  author =	{Blocki, Jeremiah and Blum, Manuel and Datta, Anupam and Vempala, Santosh},
  title =	{{Towards Human Computable Passwords}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{10:1--10:47},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.10},
  URN =		{urn:nbn:de:0030-drops-81847},
  doi =		{10.4230/LIPIcs.ITCS.2017.10},
  annote =	{Keywords: Passwords, Cognitive Authentication, Human Computation, Planted Constraint Satisfaction Problem, Statistical Dimension}
}
  • Refine by Author
  • 9 Vempala, Santosh S.
  • 5 Guruswami, Venkatesan
  • 3 Jansen, Klaus
  • 3 Vempala, Santosh
  • 2 Anari, Nima

  • Refine by Classification
  • 4 Theory of computation → Design and analysis of algorithms
  • 4 Theory of computation → Randomness, geometry and discrete structures
  • 3 Theory of computation → Random walks and Markov chains
  • 1 Mathematics of computing → Markov processes
  • 1 Mathematics of computing → Paths and connectivity problems

  • Refine by Keyword
  • 3 Approximation Algorithms
  • 3 approximation algorithms
  • 2 Brain computation
  • 2 Linear Programming
  • 2 Optimization

  • Refine by Type
  • 72 document
  • 1 volume

  • Refine by Publication Year
  • 53 2017
  • 4 2019
  • 3 2015
  • 3 2022
  • 2 2014
