Search Results

Documents authored by Reingold, Omer


Document
Generative Models of Huge Objects

Authors: Lunjia Hu, Inbal Rachel Livni Navon, and Omer Reingold

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
This work initiates the systematic study of explicit distributions that are indistinguishable from a single exponential-size combinatorial object. It extends the work of Goldreich, Goldwasser, and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness). Indistinguishability from a single object is motivated by the study of generative models in learning theory and regularity lemmas in graph theory. Problems that are well understood in the setting of pseudorandomness present significant challenges, and at times are impossible, when considering generative models of huge objects. We demonstrate the versatility of this study by providing a learning algorithm for huge indistinguishable objects in several natural settings, including: dense functions and graphs with a truthfulness requirement on the number of ones in the function or edges in the graphs, and a version of the weak regularity lemma for sparse graphs that satisfy some global properties. These and other results generalize basic pseudorandom objects as well as notions introduced in algorithmic fairness. The results rely on notions and techniques from a variety of areas, including learning theory, complexity theory, cryptography, and game theory.
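
Illustrative sketch (ours, not from the paper): a bounded distinguisher can only probe a huge object through queries, so a global property such as the number of ones can only be certified approximately, by sampling. The function names and the stand-in "huge object" below are assumptions for illustration.

import random

def estimate_ones(f, domain_size, samples=10_000):
    """Estimate the fraction of ones of a huge function f: [domain_size] -> {0,1}
    using only query access, which is all a bounded distinguisher gets."""
    return sum(f(random.randrange(domain_size)) for _ in range(samples)) / samples

# Stand-in "huge object" (hypothetical): a function on a domain of size 2^40,
# far too large to write down as a truth table. Its density of ones is 1/2.
f = lambda x: bin(x).count("1") % 2
print(estimate_ones(f, 2**40))  # ~0.5; a truthfulness test would compare this
                                # estimate against the required number of ones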

Cite as

Lunjia Hu, Inbal Rachel Livni Navon, and Omer Reingold. Generative Models of Huge Objects. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hu_et_al:LIPIcs.CCC.2023.5,
  author =	{Hu, Lunjia and Livni Navon, Inbal Rachel and Reingold, Omer},
  title =	{{Generative Models of Huge Objects}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.5},
  URN =		{urn:nbn:de:0030-drops-182758},
  doi =		{10.4230/LIPIcs.CCC.2023.5},
  annote =	{Keywords: pseudorandomness, generative models, regularity lemma}
}
Document
From the Real Towards the Ideal: Risk Prediction in a Better World

Authors: Cynthia Dwork, Omer Reingold, and Guy N. Rothblum

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Prediction algorithms assign scores in [0,1] to individuals, often interpreted as "probabilities" of a positive outcome, for example, of repaying a loan or succeeding in a job. Success, however, rarely depends only on the individual: it is a function of the individual’s interaction with the environment, past and present. Environments do not treat all demographic groups equally. We initiate the study of corrective transformations τ that map predictors of success in the real world to predictors in a better world. In the language of algorithmic fairness, letting p^* denote the true probabilities of success in the real, unfair, world, we characterize the transformations τ for which it is feasible to find a predictor q̃ that is indistinguishable from τ(p^*). The problem is challenging because we do not have access to probabilities or even outcomes in a better world. Nor do we have access to probabilities p^* in the real world. The only data available for training are outcomes from the real world. We obtain a complete characterization of when it is possible to learn predictors that are indistinguishable from τ(p^*), in the form of a simple-to-state criterion describing necessary and sufficient conditions for doing so. This criterion is inextricably bound with the very existence of uncertainty.
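
To make the data flow concrete (a hypothetical toy example, not the paper's construction; tau and p_star below are illustrative assumptions): the transformation tau acts on scores, the ground truth p^* is never observed, and the only training signal is real-world outcomes.

import math, random

def tau(p):
    """Illustrative corrective transformation: lift low real-world success
    probabilities (an assumption for this sketch, not from the paper)."""
    return math.sqrt(p)

# Hypothetical ground truth p^*: the learner never observes it directly.
p_star = lambda x: 0.2 if x % 2 == 0 else 0.7

def training_example(x):
    """All the learner sees: a real-world outcome drawn according to p^*(x)."""
    return x, int(random.random() < p_star(x))

# The indistinguishability target is tau(p^*): e.g. tau(p_star(0)) ~ 0.447,
# even though only 20% of such individuals succeed in the real-world data.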

Cite as

Cynthia Dwork, Omer Reingold, and Guy N. Rothblum. From the Real Towards the Ideal: Risk Prediction in a Better World. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 1:1-1:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.FORC.2023.1,
  author =	{Dwork, Cynthia and Reingold, Omer and Rothblum, Guy N.},
  title =	{{From the Real Towards the Ideal: Risk Prediction in a Better World}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{1:1--1:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.1},
  URN =		{urn:nbn:de:0030-drops-179224},
  doi =		{10.4230/LIPIcs.FORC.2023.1},
  annote =	{Keywords: Algorithmic Fairness, Affirmative Action, Learning, Predictions, Multicalibration, Outcome Indistinguishability}
}
Document
Bidding Strategies for Proportional Representation in Advertisement Campaigns

Authors: Inbal Livni Navon, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Many companies rely on advertising platforms such as Google, Facebook, or Instagram to recruit a large and diverse applicant pool for job openings. Prior works have shown that equitable bidding may not result in equitable outcomes due to heterogeneous levels of competition for different types of individuals. Suggestions have been made to address this problem via revisions to the advertising platform. However, it may be challenging to convince platforms to undergo a costly revamp of their systems, and in addition such a revamp might not offer the flexibility necessary to capture the many fairness notions and other constraints that advertisers would like to ensure. Instead, we consider alterations that make no change to the platform mechanism and instead change the bidding strategies used by advertisers. We compare two natural fairness objectives: one in which the advertisers must treat groups equally when bidding in order to achieve a yield with group-parity guarantees, and another in which the bids are not constrained and only the yield must satisfy parity constraints. We show that requiring parity with respect to both bids and yield can result in an arbitrarily large decrease in efficiency compared to requiring equal yield proportions alone. We find that autobidding is a natural way to realize this latter objective and show how existing work in this area can be extended to provide efficient bidding strategies that achieve high utility while satisfying group-parity constraints, together with deterministic and randomized rounding techniques that uphold these guarantees. Finally, we demonstrate the effectiveness of our proposed solutions on data adapted from a real-world employment dataset.
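
As a toy illustration of the rounding step mentioned above (our sketch, not the paper's algorithm; the names are hypothetical): winning each impression independently at its group's fractional rate keeps the expected yield proportions equal to the fractional, parity-satisfying ones.

import random

def randomized_round(rates, impressions=1000):
    """rates: {group: fractional win rate from an autobidding solution}.
    Win each impression independently with its group's rate; the expected
    number of wins per group matches the fractional solution."""
    return {g: sum(random.random() < r for _ in range(impressions))
            for g, r in rates.items()}

print(randomized_round({"group_A": 0.3, "group_B": 0.3}))  # ~300 wins each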

Cite as

Inbal Livni Navon, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen. Bidding Strategies for Proportional Representation in Advertisement Campaigns. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{navon_et_al:LIPIcs.FORC.2023.3,
  author =	{Navon, Inbal Livni and Peale, Charlotte and Reingold, Omer and Shen, Judy Hanwen},
  title =	{{Bidding Strategies for Proportional Representation in Advertisement Campaigns}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.3},
  URN =		{urn:nbn:de:0030-drops-179245},
  doi =		{10.4230/LIPIcs.FORC.2023.3},
  annote =	{Keywords: Algorithmic fairness, diversity, advertisement auctions}
}
Document
Loss Minimization Through the Lens Of Outcome Indistinguishability

Authors: Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, and Udi Wieder

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We present a new perspective on loss minimization and the recent notion of omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and a hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection, compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests - based on a collection of losses and a hypothesis class - a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature’s true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
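
The decomposition into calibration plus multiaccuracy suggests a simple audit. A minimal sketch (assuming a finite sample and a small, explicitly enumerated test class; all names are ours): report the tests c for which the residual correlation |E[(y - p(x)) c(x)]| exceeds alpha.

def multiaccuracy_violations(xs, ys, predictor, test_class, alpha=0.05):
    """Return the tests whose correlation with the residual y - p(x) exceeds
    alpha; an empty list means the predictor is alpha-multiaccurate for
    test_class on this sample."""
    n = len(xs)
    violations = []
    for name, c in test_class.items():
        corr = sum((y - predictor(x)) * c(x) for x, y in zip(xs, ys)) / n
        if abs(corr) > alpha:
            violations.append((name, corr))
    return violations

An analogous loop over the level sets of the predictor audits the calibration half of the decomposition.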

Cite as

Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, and Udi Wieder. Loss Minimization Through the Lens Of Outcome Indistinguishability. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 60:1-60:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gopalan_et_al:LIPIcs.ITCS.2023.60,
  author =	{Gopalan, Parikshit and Hu, Lunjia and Kim, Michael P. and Reingold, Omer and Wieder, Udi},
  title =	{{Loss Minimization Through the Lens Of Outcome Indistinguishability}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{60:1--60:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.60},
  URN =		{urn:nbn:de:0030-drops-175635},
  doi =		{10.4230/LIPIcs.ITCS.2023.60},
  annote =	{Keywords: Loss Minimization, Indistinguishability}
}
Document
Leximax Approximations and Representative Cohort Selection

Authors: Monika Henzinger, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
Finding a representative cohort from a broad pool of candidates is a goal that arises in many contexts such as choosing governing committees and consumer panels. While there are many ways to define the degree to which a cohort represents a population, a very appealing solution concept is lexicographic maximality (leximax), which offers a natural (Pareto-optimal-like) interpretation: the utility of no population can be increased without decreasing the utility of a population that is already worse off. However, finding a leximax solution can be highly sensitive to small variations in the utilities of certain groups. In this work, we explore new notions of approximate leximax solutions with three distinct motivations: better algorithmic efficiency, exploiting significant utility improvements, and robustness to noise. Among other definitional contributions, we give a new notion of an approximate leximax that satisfies a similarly appealing semantic interpretation and relate it to algorithmically-feasible approximate leximax notions. When group utilities are linear over cohort candidates, we give a polynomial-time algorithm for finding a leximax distribution over cohort candidates in both the exact and the approximate setting. Furthermore, we show that finding an integer solution to leximax cohort selection with linear utilities is NP-hard.
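
For intuition (a minimal sketch of the solution concept, not the paper's algorithm): leximax compares solutions by their sorted utility vectors, improving the worst-off population first, then the second worst-off, and so on.

def leximax_key(utilities):
    """Sort utilities ascending; comparing these tuples lexicographically
    ranks solutions by the worst-off population, then the next, etc."""
    return tuple(sorted(utilities))

# Three hypothetical cohorts with utilities for three populations.
candidates = {"cohort1": [3, 9, 4], "cohort2": [4, 4, 8], "cohort3": [4, 5, 7]}
best = max(candidates, key=lambda c: leximax_key(candidates[c]))
print(best)  # cohort3: sorted (4, 5, 7) beats (4, 4, 8) and (3, 4, 9)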

Cite as

Monika Henzinger, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen. Leximax Approximations and Representative Cohort Selection. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 2:1-2:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{henzinger_et_al:LIPIcs.FORC.2022.2,
  author =	{Henzinger, Monika and Peale, Charlotte and Reingold, Omer and Shen, Judy Hanwen},
  title =	{{Leximax Approximations and Representative Cohort Selection}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{2:1--2:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.2},
  URN =		{urn:nbn:de:0030-drops-165258},
  doi =		{10.4230/LIPIcs.FORC.2022.2},
  annote =	{Keywords: fairness, cohort selection, leximin, maxmin}
}
Document
Omnipredictors

Authors: Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., "will it rain tomorrow?"). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. Alternatively, the same classifier may be used to inform multiple decisions, which correspond to multiple loss functions, requiring multiple learning algorithms to be run on the same data. We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action. We introduce the notion of an (L,𝒞)-omnipredictor, which could be used to optimize any loss in a family L. Once the loss function is set, the outputs of the predictor can be post-processed (a simple univariate data-independent transformation of individual predictions) to do well compared with any hypothesis from the class 𝒞. The post-processing is essentially what one would perform if the outputs of the predictor were true probabilities of the uncertain events. In a sense, omnipredictors extract all the predictive power from the class 𝒞, irrespective of the loss function in L. We show that such "loss-oblivious" learning is feasible through a connection to multicalibration, a notion introduced in the context of algorithmic fairness. A multicalibrated predictor doesn’t aim to minimize some loss function, but rather to make calibrated predictions, even when conditioned on inputs lying in certain sets c belonging to a family 𝒞 which is weakly learnable. We show that a 𝒞-multicalibrated predictor is also an (L,𝒞)-omnipredictor, where L contains all convex loss functions with some mild Lipschitz conditions. The predictors are even omnipredictors with respect to sparse linear combinations of functions in 𝒞. As a corollary, we deduce that distribution-specific weak agnostic learning is complete for a large class of loss minimization tasks. In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results. Finally, we transfer our insights back to the context of algorithmic fairness by providing omnipredictors for multi-group loss minimization.
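
The post-processing step has a simple closed form; as a sketch (ours, assuming a finite action set), once the loss is fixed, each prediction p is mapped to the action that would be optimal if p were the true probability of the event.

def post_process(p, loss, actions):
    """Act as if p were the true probability that y = 1: pick the action
    minimizing p * loss(a, 1) + (1 - p) * loss(a, 0). This univariate,
    data-independent map is applied to each individual prediction."""
    return min(actions, key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))

# Example with squared loss over a grid of actions: the optimum sits near p.
squared = lambda a, y: (a - y) ** 2
print(post_process(0.73, squared, [i / 10 for i in range(11)]))  # -> 0.7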

Cite as

Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder. Omnipredictors. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 79:1-79:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gopalan_et_al:LIPIcs.ITCS.2022.79,
  author =	{Gopalan, Parikshit and Kalai, Adam Tauman and Reingold, Omer and Sharan, Vatsal and Wieder, Udi},
  title =	{{Omnipredictors}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{79:1--79:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.79},
  URN =		{urn:nbn:de:0030-drops-156755},
  doi =		{10.4230/LIPIcs.ITCS.2022.79},
  annote =	{Keywords: Loss-minimization, multi-group fairness, agnostic learning, boosting}
}
Document
RANDOM
Pseudorandom Generators for Read-Once Monotone Branching Programs

Authors: Dean Doron, Raghu Meka, Omer Reingold, Avishay Tal, and Salil Vadhan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
Motivated by the derandomization of space-bounded computation, there has been a long line of work on constructing pseudorandom generators (PRGs) against various forms of read-once branching programs (ROBPs), with a goal of improving the O(log² n) seed length of Nisan’s classic construction [Noam Nisan, 1992] to the optimal O(log n). In this work, we construct an explicit PRG with seed length Õ(log n) for constant-width ROBPs that are monotone, meaning that the states at each time step can be ordered so that edges with the same labels never cross each other. Equivalently, for each fixed input, the transition functions are a monotone function of the state. This result is complementary to a line of work that gave PRGs with seed length O(log n) for (ordered) permutation ROBPs of constant width [Braverman et al., 2014; Koucký et al., 2011; De, 2011; Thomas Steinke, 2012], since the monotonicity constraint can be seen as the "opposite" of the permutation constraint. Our PRG also works for monotone ROBPs that can read the input bits in any order, which are strictly more powerful than read-once AC⁰. Our PRG achieves better parameters (in terms of the dependence on the depth of the circuit) than the best previous pseudorandom generator for read-once AC⁰, due to Doron, Hatami, and Hoza [Doron et al., 2019]. Our pseudorandom generator construction follows Ajtai and Wigderson’s approach of iterated pseudorandom restrictions [Ajtai and Wigderson, 1989; Gopalan et al., 2012]. We give a randomness-efficient width-reduction process which proves that the branching program simplifies to an O(log n)-junta after only O(log log n) independent applications of the Forbes-Kelley pseudorandom restrictions [Michael A. Forbes and Zander Kelley, 2018].
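
For concreteness (our own sketch of the model, not of the PRG construction): a width-w ROBP is a layered automaton reading one input bit per step, and monotonicity says each transition function is nondecreasing in the (ordered) state, which is what the check below enforces.

def eval_robp(transitions, x, start=0):
    """transitions[i][b]: list mapping state -> state at step i on bit b.
    Evaluate the (ordered) read-once branching program on input x."""
    state = start
    for step, bit in zip(transitions, x):
        state = step[bit][state]
    return state

def is_monotone(transitions):
    """Monotone: every transition map is nondecreasing in the state order
    (equivalently, edges with the same label never cross)."""
    return all(f[s] <= f[s + 1]
               for step in transitions for f in step
               for s in range(len(f) - 1))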

Cite as

Dean Doron, Raghu Meka, Omer Reingold, Avishay Tal, and Salil Vadhan. Pseudorandom Generators for Read-Once Monotone Branching Programs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 58:1-58:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{doron_et_al:LIPIcs.APPROX/RANDOM.2021.58,
  author =	{Doron, Dean and Meka, Raghu and Reingold, Omer and Tal, Avishay and Vadhan, Salil},
  title =	{{Pseudorandom Generators for Read-Once Monotone Branching Programs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{58:1--58:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.58},
  URN =		{urn:nbn:de:0030-drops-147513},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.58},
  annote =	{Keywords: Branching programs, pseudorandom generators, constant depth circuits}
}
Document
Bounded-Leakage Differential Privacy

Authors: Katrina Ligett, Charlotte Peale, and Omer Reingold

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
We introduce and study a relaxation of differential privacy [Dwork et al., 2006] that accounts for mechanisms that leak some additional, bounded information about the database. We apply this notion to reason about two distinct settings where the notion of differential privacy is of limited use. First, we consider cases, such as in the 2020 US Census [Abowd, 2018], in which some information about the database is released exactly or with small noise. Second, we consider the accumulation of privacy harms for an individual across studies that may not even include the data of this individual. The tools that we develop for bounded-leakage differential privacy allow us to reason about privacy loss in these settings, and to show that individuals retain some meaningful protections.
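
As a toy instance of the setting (ours; the paper's definitions are more general): a mechanism may release one statistic exactly, the bounded leakage, while protecting the rest with a standard Laplace mechanism.

import math, random

def laplace(scale):
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def release(db, epsilon):
    """Leak the database size exactly (as with an exact census total), but
    release the mean of values in [0,1] with epsilon-DP noise (the sum has
    sensitivity 1)."""
    n = len(db)  # the bounded leakage: published without noise
    noisy_mean = (sum(db) + laplace(1.0 / epsilon)) / n
    return n, noisy_mean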

Cite as

Katrina Ligett, Charlotte Peale, and Omer Reingold. Bounded-Leakage Differential Privacy. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ligett_et_al:LIPIcs.FORC.2020.10,
  author =	{Ligett, Katrina and Peale, Charlotte and Reingold, Omer},
  title =	{{Bounded-Leakage Differential Privacy}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.10},
  URN =		{urn:nbn:de:0030-drops-120265},
  doi =		{10.4230/LIPIcs.FORC.2020.10},
  annote =	{Keywords: differential privacy, applications, privacy, leakage, auxiliary information}
}
Document
RANDOM
Deterministic Approximation of Random Walks in Small Space

Authors: Jack Murtagh, Omer Reingold, Aaron Sidford, and Salil Vadhan

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We give a deterministic, nearly logarithmic-space algorithm that given an undirected graph G, a positive integer r, and a set S of vertices, approximates the conductance of S in the r-step random walk on G to within a factor of 1+epsilon, where epsilon>0 is an arbitrarily small constant. More generally, our algorithm computes an epsilon-spectral approximation to the normalized Laplacian of the r-step walk. Our algorithm combines the derandomized square graph operation [Eyal Rozenman and Salil Vadhan, 2005], which we recently used for solving Laplacian systems in nearly logarithmic space [Murtagh et al., 2017], with ideas from [Cheng et al., 2015], which gave an algorithm that is time-efficient (while ours is space-efficient) and randomized (while ours is deterministic) for the case of even r (while ours works for all r). Along the way, we provide some new results that generalize technical machinery and yield improvements over previous work. First, we obtain a nearly linear-time randomized algorithm for computing a spectral approximation to the normalized Laplacian for odd r. Second, we define and analyze a generalization of the derandomized square for irregular graphs and for sparsifying the product of two distinct graphs. As part of this generalization, we also give a strongly explicit construction of expander graphs of every size.
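
To fix notation (a small numerical sketch with numpy, not the space-efficient algorithm): the quantity being approximated is the conductance of S under the r-step walk, computable directly from the transition matrix when the graph is small.

import numpy as np

def r_step_conductance(A, S, r):
    """Probability that an r-step random walk on the graph with adjacency
    matrix A, started from the stationary distribution restricted to S,
    ends outside S."""
    d = A.sum(axis=1).astype(float)
    W = A / d[:, None]                    # one-step transition matrix
    Wr = np.linalg.matrix_power(W, r)     # r-step walk
    pi = d / d.sum()                      # stationary distribution
    mask = np.zeros(len(d), dtype=bool)
    mask[np.asarray(S)] = True
    start = (pi * mask) / (pi * mask).sum()
    return float(start @ Wr @ (~mask).astype(float))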

Cite as

Jack Murtagh, Omer Reingold, Aaron Sidford, and Salil Vadhan. Deterministic Approximation of Random Walks in Small Space. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 42:1-42:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{murtagh_et_al:LIPIcs.APPROX-RANDOM.2019.42,
  author =	{Murtagh, Jack and Reingold, Omer and Sidford, Aaron and Vadhan, Salil},
  title =	{{Deterministic Approximation of Random Walks in Small Space}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{42:1--42:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.42},
  URN =		{urn:nbn:de:0030-drops-112577},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.42},
  annote =	{Keywords: random walks, space complexity, derandomization, spectral approximation, expander graphs}
}
Document
On the Communication Complexity of Key-Agreement Protocols

Authors: Iftach Haitner, Noam Mazor, Rotem Oshman, Omer Reingold, and Amir Yehudayoff

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Key-agreement protocols whose security is proven in the random oracle model are an important alternative to protocols based on public-key cryptography. In the random oracle model, the parties and the eavesdropper have access to a shared random function (an "oracle"), but the parties are limited in the number of queries they can make to the oracle. The random oracle serves as an abstraction for black-box access to a symmetric cryptographic primitive, such as a collision resistant hash. Unfortunately, as shown by Impagliazzo and Rudich [STOC '89] and Barak and Mahmoody [Crypto '09], such protocols can only guarantee limited secrecy: the key of any l-query protocol can be revealed by an O(l^2)-query adversary. This quadratic gap between the query complexity of the honest parties and the eavesdropper matches the gap obtained by Merkle's Puzzles protocol [Merkle, CACM '78]. In this work we tackle a new aspect of key-agreement protocols in the random oracle model: their communication complexity. In Merkle's Puzzles, to obtain secrecy against an eavesdropper that makes roughly l^2 queries, the honest parties need to exchange Omega(l) bits. We show that for protocols with certain natural properties, ones that Merkle's Puzzles has, such high communication is unavoidable. Specifically, this is the case if the honest parties' queries are uniformly random, or alternatively if the protocol uses non-adaptive queries and has only two rounds. Our proof for the first setting uses a novel reduction from the set-disjointness problem in two-party communication complexity. For the second setting we prove the lower bound directly, using information-theoretic arguments. Understanding the communication complexity of protocols whose security is proven in the random-oracle model is an important question in the study of practical protocols. Our results and proof techniques are a first step in this direction.
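
For readers unfamiliar with the baseline (a self-contained toy sketch, not code from the paper): in Merkle's Puzzles both parties make about l oracle queries and exchange Omega(l) values, while an eavesdropper needs about l^2 queries to find the key; note how the communication grows with the number of honest queries.

import random

def make_oracle():
    table = {}
    def oracle(x):  # shared random function, sampled lazily
        if x not in table:
            table[x] = random.getrandbits(64)
        return table[x]
    return oracle

def merkle_puzzles(l, oracle):
    domain = l * l                             # eavesdropper must search ~l^2 points
    alice = random.sample(range(domain), l)    # Alice's l oracle queries
    msg = [oracle(x) for x in alice]           # Omega(l) communication
    for x in random.sample(range(domain), l):  # Bob's l oracle queries
        if oracle(x) in msg:                   # collision: both parties know x
            return x                           # the shared key
    return None  # no collision this run; a full protocol would retry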

Cite as

Iftach Haitner, Noam Mazor, Rotem Oshman, Omer Reingold, and Amir Yehudayoff. On the Communication Complexity of Key-Agreement Protocols. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{haitner_et_al:LIPIcs.ITCS.2019.40,
  author =	{Haitner, Iftach and Mazor, Noam and Oshman, Rotem and Reingold, Omer and Yehudayoff, Amir},
  title =	{{On the Communication Complexity of Key-Agreement Protocols}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.40},
  URN =		{urn:nbn:de:0030-drops-101335},
  doi =		{10.4230/LIPIcs.ITCS.2019.40},
  annote =	{Keywords: key agreement, random oracle, communication complexity, Merkle's puzzles}
}
Document
Efficient Batch Verification for UP

Authors: Omer Reingold, Guy N. Rothblum, and Ron D. Rothblum

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
Consider a setting in which a prover wants to convince a verifier of the correctness of k NP statements. For example, the prover wants to convince the verifier that k given integers N_1,...,N_k are all RSA moduli (i.e., products of equal length primes). Clearly this problem can be solved by simply having the prover send the k NP witnesses, but this involves a lot of communication. Can interaction help? In particular, is it possible to construct interactive proofs for this task whose communication grows sub-linearly with k? Our main result is such an interactive proof for verifying the correctness of any k UP statements (i.e., NP statements that have a unique witness). The proof-system uses only a constant number of rounds and the communication complexity is k^delta * poly(m), where delta>0 is an arbitrarily small constant, m is the length of a single witness, and the poly term refers to a fixed polynomial that only depends on the language and not on delta. The (honest) prover strategy can be implemented in polynomial-time given access to the k (unique) witnesses. Our proof leverages "interactive witness verification" (IWV), a new type of proof-system that may be of independent interest. An IWV is a proof-system in which the verifier needs to verify the correctness of an NP statement using: (i) a sublinear number of queries to an alleged NP witness, and (ii) a short interaction with a powerful but untrusted prover. In contrast to the setting of PCPs and Interactive PCPs, here the verifier only has access to the raw NP witness, rather than some encoding thereof.

Cite as

Omer Reingold, Guy N. Rothblum, and Ron D. Rothblum. Efficient Batch Verification for UP. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 22:1-22:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{reingold_et_al:LIPIcs.CCC.2018.22,
  author =	{Reingold, Omer and Rothblum, Guy N. and Rothblum, Ron D.},
  title =	{{Efficient Batch Verification for UP}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{22:1--22:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.22},
  URN =		{urn:nbn:de:0030-drops-88681},
  doi =		{10.4230/LIPIcs.CCC.2018.22},
  annote =	{Keywords: Interactive Proof, Batch Verification, Unique Solution}
}
Document
Deterministic Coupon Collection and Better Strong Dispersers

Authors: Raghu Meka, Omer Reingold, and Yuan Zhou

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
Hashing is one of the main techniques in data processing and algorithm design for very large data sets. While random hash functions satisfy most desirable properties, it is often too expensive to store a fully random hash function. Motivated by this, much attention has been given to designing small families of hash functions suitable for various applications. In this work, we study the question of designing space-efficient hash families H = {h:[U] -> [N]} with the natural property of 'covering': H is said to be covering if any set of Omega(N log N) distinct items from the universe (the "coupon-collector limit") is hashed to cover all N bins by most hash functions in H. We give an explicit covering family H of size poly(N) (which is optimal), so that hash functions in H can be specified efficiently by O(log N) bits. We build covering hash functions by drawing a connection to "dispersers", which are quite well-studied and have a variety of applications themselves. We in fact need strong dispersers, and we give new constructions of strong dispersers which may be of independent interest. Specifically, we construct strong dispersers with optimal entropy loss in the high min-entropy regime with very small error (poly(n)/2^n for n-bit sources). We also provide a strong disperser construction with constant error but for any min-entropy. Our constructions achieve this by using part of the source to replace the seed of previous non-strong constructions in surprising ways. In doing so, we take two of the few constructions of dispersers with parameters better than known extractors and make them strong.
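
The "coupon-collector limit" is easy to see empirically; a minimal sketch (ours): hashing about c * N ln N distinct items into N bins with a truly random hash function covers all bins with high probability once c exceeds 1.

import math, random

def covers_all_bins(N, c=2.0):
    """Hash c * N ln N distinct items into N bins uniformly at random and
    report whether every bin is hit - the covering property that the paper's
    explicit family H achieves with hash functions specified by O(log N) bits."""
    m = int(c * N * math.log(N))
    bins = {random.randrange(N) for _ in range(m)}
    return len(bins) == N

print(sum(covers_all_bins(1000) for _ in range(20)), "of 20 trials covered")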

Cite as

Raghu Meka, Omer Reingold, and Yuan Zhou. Deterministic Coupon Collection and Better Strong Dispersers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 872-884, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{meka_et_al:LIPIcs.APPROX-RANDOM.2014.872,
  author =	{Meka, Raghu and Reingold, Omer and Zhou, Yuan},
  title =	{{Deterministic Coupon Collection and Better Strong Dispersers}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{872--884},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.872},
  URN =		{urn:nbn:de:0030-drops-47440},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.872},
  annote =	{Keywords: Coupon collection, dispersers, strong dispersers, hashing, pseudorandomness}
}