21 Search Results for "Reingold, Omer"


Document
Generative Models of Huge Objects

Authors: Lunjia Hu, Inbal Rachel Livni Navon, and Omer Reingold

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
This work initiates the systematic study of explicit distributions that are indistinguishable from a single exponential-size combinatorial object. In doing so, we extend the work of Goldreich, Goldwasser and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness). Indistinguishability from a single object is motivated by the study of generative models in learning theory and of regularity lemmas in graph theory. Problems that are well understood in the setting of pseudorandomness present significant challenges, and are at times impossible, when considering generative models of huge objects. We demonstrate the versatility of this study by providing a learning algorithm for huge indistinguishable objects in several natural settings, including: dense functions and graphs with a truthfulness requirement on the number of ones in the function or edges in the graphs, and a version of the weak regularity lemma for sparse graphs that satisfy some global properties. These and other results generalize basic pseudorandom objects as well as notions introduced in algorithmic fairness. The results rely on notions and techniques from a variety of areas, including learning theory, complexity theory, cryptography, and game theory.
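
A toy illustration of the query-access setting (not the paper's construction): a generative model of a huge Boolean function can answer position queries lazily, sampling and memoizing bits on demand, so that a bounded-query distinguisher comparing densities cannot tell it apart from a single fixed object with the same fraction of ones. All names and parameters below are hypothetical.

import random

N = 10**12          # huge domain: the object is never materialized
DENSITY = 0.3       # truthfulness constraint: fraction of ones
Q = 1000            # number of queries the distinguisher makes

def true_object(x):
    # stand-in for one fixed huge object with roughly DENSITY ones;
    # a pseudorandom rule keyed by x (fixed within a run)
    return 1 if hash(('obj', x)) % 10**6 < DENSITY * 10**6 else 0

def model_query(memo, x):
    # lazily sampled model object: answers are drawn on demand and
    # memoized, so repeated queries stay consistent (one object per run)
    if x not in memo:
        memo[x] = 1 if random.random() < DENSITY else 0
    return memo[x]

def density_test(answer):
    xs = [random.randrange(N) for _ in range(Q)]
    return sum(answer(x) for x in xs) / Q

memo = {}
print('true object density estimate :', density_test(true_object))
print('model object density estimate:', density_test(lambda x: model_query(memo, x)))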

Cite as

Lunjia Hu, Inbal Rachel Livni Navon, and Omer Reingold. Generative Models of Huge Objects. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hu_et_al:LIPIcs.CCC.2023.5,
  author =	{Hu, Lunjia and Livni Navon, Inbal Rachel and Reingold, Omer},
  title =	{{Generative Models of Huge Objects}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.5},
  URN =		{urn:nbn:de:0030-drops-182758},
  doi =		{10.4230/LIPIcs.CCC.2023.5},
  annote =	{Keywords: pseudorandomness, generative models, regularity lemma}
}
Document
From the Real Towards the Ideal: Risk Prediction in a Better World

Authors: Cynthia Dwork, Omer Reingold, and Guy N. Rothblum

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Prediction algorithms assign scores in [0,1] to individuals, often interpreted as "probabilities" of a positive outcome, for example, of repaying a loan or succeeding in a job. Success, however, rarely depends only on the individual: it is a function of the individual’s interaction with the environment, past and present. Environments do not treat all demographic groups equally. We initiate the study of corrective transformations τ that map predictors of success in the real world to predictors in a better world. In the language of algorithmic fairness, letting p^* denote the true probabilities of success in the real, unfair world, we characterize the transformations τ for which it is feasible to find a predictor q̃ that is indistinguishable from τ(p^*). The problem is challenging because we do not have access to probabilities or even outcomes in a better world. Nor do we have access to the probabilities p^* in the real world. The only data available for training are outcomes from the real world. We obtain a complete characterization of when it is possible to learn predictors that are indistinguishable from τ(p^*), in the form of a simple-to-state criterion describing necessary and sufficient conditions for doing so. This criterion is inextricably bound with the very existence of uncertainty.
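
The full characterization is beyond a snippet, but one ingredient can be made concrete: when τ is affine, expectations of τ(p^*) over any audit set can be estimated from real-world outcomes alone, by linearity of expectation, even though p^* itself is never observed. A minimal sketch with synthetic data and a hypothetical τ(p) = 0.8p + 0.15 (affine τ is an illustrative special case, not the paper's criterion):

import random

random.seed(0)
n = 200_000
xs = [random.random() for _ in range(n)]          # individuals' features
p_star = [0.2 + 0.6 * x for x in xs]              # real-world truth, unobserved
ys = [1 if random.random() < p else 0 for p in p_star]   # the only training data

a, b = 0.8, 0.15                                  # hypothetical affine tau
S = [i for i in range(n) if xs[i] > 0.5]          # an audit set

direct = sum(a * p_star[i] + b for i in S) / n                 # needs p_star
from_outcomes = (a * sum(ys[i] for i in S) + b * len(S)) / n   # outcomes only
print(direct, from_outcomes)   # agree up to sampling error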

Cite as

Cynthia Dwork, Omer Reingold, and Guy N. Rothblum. From the Real Towards the Ideal: Risk Prediction in a Better World. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 1:1-1:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.FORC.2023.1,
  author =	{Dwork, Cynthia and Reingold, Omer and Rothblum, Guy N.},
  title =	{{From the Real Towards the Ideal: Risk Prediction in a Better World}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{1:1--1:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.1},
  URN =		{urn:nbn:de:0030-drops-179224},
  doi =		{10.4230/LIPIcs.FORC.2023.1},
  annote =	{Keywords: Algorithmic Fairness, Affirmative Action, Learning, Predictions, Multicalibration, Outcome Indistinguishability}
}
Document
Bidding Strategies for Proportional Representation in Advertisement Campaigns

Authors: Inbal Livni Navon, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Many companies rely on advertising platforms such as Google, Facebook, or Instagram to recruit a large and diverse applicant pool for job openings. Prior works have shown that equitable bidding may not result in equitable outcomes, due to heterogeneous levels of competition for different types of individuals. Suggestions have been made to address this problem via revisions to the advertising platform. However, it may be challenging to convince platforms to undergo a costly revamp of their system, and in addition such a revision might not offer the flexibility necessary to capture the many types of fairness notions and other constraints that advertisers would like to ensure. Instead, we consider alterations that make no change to the platform mechanism and only change the bidding strategies used by advertisers. We compare two natural fairness objectives: one in which the advertisers must treat groups equally when bidding in order to achieve a yield with group-parity guarantees, and another in which the bids are not constrained and only the yield must satisfy parity constraints. We show that requiring parity with respect to both bids and yield can result in an arbitrarily large decrease in efficiency compared to requiring equal yield proportions alone. We find that autobidding is a natural way to realize this latter objective, and we show how existing work in this area can be extended to provide efficient bidding strategies that achieve high utility while satisfying group-parity constraints, as well as deterministic and randomized rounding techniques to uphold these guarantees. Finally, we demonstrate the effectiveness of our proposed solutions on data adapted from a real-world employment dataset.

Cite as

Inbal Livni Navon, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen. Bidding Strategies for Proportional Representation in Advertisement Campaigns. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{navon_et_al:LIPIcs.FORC.2023.3,
  author =	{Navon, Inbal Livni and Peale, Charlotte and Reingold, Omer and Shen, Judy Hanwen},
  title =	{{Bidding Strategies for Proportional Representation in Advertisement Campaigns}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.3},
  URN =		{urn:nbn:de:0030-drops-179245},
  doi =		{10.4230/LIPIcs.FORC.2023.3},
  annote =	{Keywords: Algorithmic fairness, diversity, advertisement auctions}
}
Document
Multiplicative Metric Fairness Under Composition

Authors: Milan Mossé

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Dwork, Hardt, Pitassi, Reingold, & Zemel [Dwork et al., 2012] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers which are fair when considered in isolation [Chawla and Jagadeesan, 2020; Chawla et al., 2022; Dwork and Ilvento, 2018; Dwork et al., 2020; Ilvento et al., 2020] and in work studying the relationship between fair treatment of individuals and fair treatment of groups [Dwork et al., 2012; Dwork and Ilvento, 2018; Kim et al., 2018]. Here, we extend these lines of research to the second, less-studied notion, which we call multiplicative metric fairness. In particular, we exactly characterize the fairness of conjunctions and disjunctions of multiplicative metric fair classifiers, and the extent to which a classifier which satisfies multiplicative metric fairness also treats groups fairly. This characterization reveals that whereas additive metric fairness becomes easier to satisfy when probabilities of acceptance are small, leading to unfairness under functional and group compositions, multiplicative metric fairness is better behaved, owing to its scale invariance.
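
A minimal sketch of the two notions, under the (assumed) reading that the additive version bounds |p(x) - p(y)| by d(x,y) while the multiplicative version bounds the log-ratio of acceptance probabilities; the paper's exact definitions may differ in details. The example shows the phenomenon from the abstract: the additive notion becomes easy to satisfy at small acceptance probabilities, while the multiplicative notion gives the same verdict at every scale.

import math
from itertools import combinations

def additive_ok(p, d):
    # additive metric fairness (sketch): |p(x) - p(y)| <= d(x, y)
    return all(abs(p[x] - p[y]) <= d[frozenset((x, y))]
               for x, y in combinations(p, 2))

def multiplicative_ok(p, d):
    # multiplicative metric fairness (sketch): |ln p(x) - ln p(y)| <= d(x, y)
    return all(abs(math.log(p[x] / p[y])) <= d[frozenset((x, y))]
               for x, y in combinations(p, 2))

d = {frozenset(('x', 'y')): 0.1}                   # x and y are similar
small = {'x': 0.001, 'y': 0.010}                   # 10x ratio, tiny gap
print(additive_ok(small, d), multiplicative_ok(small, d))    # True False
scaled = {k: 100 * v for k, v in small.items()}    # same classifier, rescaled
print(additive_ok(scaled, d), multiplicative_ok(scaled, d))  # False False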

Cite as

Milan Mossé. Multiplicative Metric Fairness Under Composition. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 4:1-4:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{mosse:LIPIcs.FORC.2023.4,
  author =	{Moss\'{e}, Milan},
  title =	{{Multiplicative Metric Fairness Under Composition}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{4:1--4:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.4},
  URN =		{urn:nbn:de:0030-drops-179250},
  doi =		{10.4230/LIPIcs.FORC.2023.4},
  annote =	{Keywords: algorithmic fairness, metric fairness, fairness under composition}
}
Document
Loss Minimization Through the Lens Of Outcome Indistinguishability

Authors: Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, and Udi Wieder

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We present a new perspective on loss minimization and the recent notion of omniprediction through the lens of outcome indistinguishability. For a collection of losses and a hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection, compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability (Loss OI). For a set of statistical tests - based on a collection of losses and a hypothesis class - a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature’s true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from generalized linear models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
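
A back-of-the-envelope view of the decomposition, checking the two conditions empirically on synthetic data; the bucketing and the tiny test class below are hypothetical stand-ins, not the paper's constructions.

import random
random.seed(1)

n = 100_000
xs = [random.random() for _ in range(n)]
p_true = [0.3 + 0.4 * x for x in xs]
ys = [1 if random.random() < p else 0 for p in p_true]
preds = p_true                       # a perfect predictor, for illustration

def calibration_error(preds, ys, bins=10):
    # binned calibration: within each prediction bucket, the mean outcome
    # should match the mean prediction
    buckets = {}
    for p, y in zip(preds, ys):
        buckets.setdefault(min(int(p * bins), bins - 1), []).append((p, y))
    return max(abs(sum(p for p, _ in b) / len(b) - sum(y for _, y in b) / len(b))
               for b in buckets.values())

def multiaccuracy_error(preds, ys, xs, tests):
    # multiaccuracy w.r.t. test functions c: E[c(x) * (y - p(x))] should be ~0
    return max(abs(sum(c(x) * (y - p) for x, p, y in zip(xs, preds, ys))) / len(xs)
               for c in tests)

tests = [lambda x: 1.0, lambda x: x, lambda x: 1.0 if x > 0.5 else 0.0]
print(calibration_error(preds, ys))               # ~0: calibrated
print(multiaccuracy_error(preds, ys, xs, tests))  # ~0: multiaccurate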

Cite as

Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, and Udi Wieder. Loss Minimization Through the Lens Of Outcome Indistinguishability. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 60:1-60:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gopalan_et_al:LIPIcs.ITCS.2023.60,
  author =	{Gopalan, Parikshit and Hu, Lunjia and Kim, Michael P. and Reingold, Omer and Wieder, Udi},
  title =	{{Loss Minimization Through the Lens Of Outcome Indistinguishability}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{60:1--60:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.60},
  URN =		{urn:nbn:de:0030-drops-175635},
  doi =		{10.4230/LIPIcs.ITCS.2023.60},
  annote =	{Keywords: Loss Minimization, Indistinguishability}
}
Document
Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes

Authors: Lunjia Hu and Charlotte Peale

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In many learning theory problems, a central role is played by a hypothesis class: we might assume that the data is labeled according to a hypothesis in the class (usually referred to as the realizable setting), or we might evaluate the learned model by comparing it with the best hypothesis in the class (the agnostic setting). Taking a step beyond these classic setups that involve only a single hypothesis class, we study a variety of problems that involve two hypothesis classes simultaneously. We introduce comparative learning as a combination of the realizable and agnostic settings in PAC learning: given two binary hypothesis classes S and B, we assume that the data is labeled according to a hypothesis in the source class S and require the learned model to achieve an accuracy comparable to the best hypothesis in the benchmark class B. Even when both S and B have infinite VC dimension, comparative learning can still have a small sample complexity. We show that the sample complexity of comparative learning is characterized by the mutual VC dimension VC(S,B), which we define to be the maximum size of a subset shattered by both S and B. We also show a similar result in the online setting, where we give a regret characterization in terms of the analogous mutual Littlestone dimension Ldim(S,B). These results also hold for partial hypotheses. We additionally show that the insights necessary to characterize the sample complexity of comparative learning can be applied to other tasks involving two hypothesis classes. In particular, we characterize the sample complexity of realizable multiaccuracy and multicalibration using the mutual fat-shattering dimension, an analogue of the mutual VC dimension for real-valued hypotheses. This not only solves an open problem proposed by Hu, Peale, and Reingold (2022), but also leads to independently interesting results extending classic ones about regression, boosting, and covering numbers to our two-hypothesis-class setting.
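
For tiny finite classes, the mutual VC dimension can be computed by brute force straight from its definition; a sketch with hypothesis classes represented as label tuples over a three-point domain (purely for illustration):

from itertools import combinations, product

def shatters(H, subset):
    # H: a class of hypotheses, each a tuple of 0/1 labels over the domain
    patterns = {tuple(h[i] for i in subset) for h in H}
    return len(patterns) == 2 ** len(subset)

def mutual_vc(S, B, domain_size):
    # VC(S, B): the largest subset of the domain shattered by BOTH classes
    for k in range(domain_size, 0, -1):
        if any(shatters(S, sub) and shatters(B, sub)
               for sub in combinations(range(domain_size), k)):
            return k
    return 0

S = list(product([0, 1], repeat=3))                         # all labelings
B = [h for h in product([0, 1], repeat=3) if sum(h) <= 1]   # at most one 1
print(mutual_vc(S, B, 3))   # 1: B shatters singletons but no pair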

Cite as

Lunjia Hu and Charlotte Peale. Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 72:1-72:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hu_et_al:LIPIcs.ITCS.2023.72,
  author =	{Hu, Lunjia and Peale, Charlotte},
  title =	{{Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{72:1--72:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.72},
  URN =		{urn:nbn:de:0030-drops-175752},
  doi =		{10.4230/LIPIcs.ITCS.2023.72},
  annote =	{Keywords: Comparative learning, mutual VC dimension, realizable multiaccuracy and multicalibration, sample complexity}
}
Document
Making Decisions Under Outcome Performativity

Authors: Michael P. Kim and Juan C. Perdomo

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Decision-makers often act in response to data-driven predictions, with the goal of achieving favorable outcomes. In such settings, predictions don’t passively forecast the future; instead, predictions actively shape the distribution of outcomes they are meant to predict. This performative prediction setting [Brown et al., 2022] raises new challenges for learning "optimal" decision rules. In particular, existing solution concepts do not address the apparent tension between the goals of forecasting outcomes accurately and steering individuals to achieve desirable outcomes. To contend with this concern, we introduce a new optimality concept - performative omniprediction - adapted from the supervised (non-performative) learning setting [Gopalan et al., 2022]. A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possibly-competing objectives. Our main result demonstrates that efficient performative omnipredictors exist, under a natural restriction of performative prediction, which we call outcome performativity. On a technical level, our results follow by carefully generalizing the notion of outcome indistinguishability [Cynthia Dwork et al., 2021] to the outcome performative setting. From an appropriate notion of Performative OI, we recover many consequences known to hold in the supervised setting, such as omniprediction and universal adaptability [Kim et al., 2022].

Cite as

Michael P. Kim and Juan C. Perdomo. Making Decisions Under Outcome Performativity. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 79:1-79:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kim_et_al:LIPIcs.ITCS.2023.79,
  author =	{Kim, Michael P. and Perdomo, Juan C.},
  title =	{{Making Decisions Under Outcome Performativity}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{79:1--79:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.79},
  URN =		{urn:nbn:de:0030-drops-175824},
  doi =		{10.4230/LIPIcs.ITCS.2023.79},
  annote =	{Keywords: performative prediction, outcome indistinguishability}
}
Document
Leximax Approximations and Representative Cohort Selection

Authors: Monika Henzinger, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
Finding a representative cohort from a broad pool of candidates is a goal that arises in many contexts, such as choosing governing committees and consumer panels. While there are many ways to define the degree to which a cohort represents a population, a very appealing solution concept is lexicographic maximality (leximax), which offers a natural (Pareto-optimal-like) interpretation: no population's utility can be increased without decreasing the utility of a population that is already worse off. However, finding a leximax solution can depend heavily on small variations in the utility of certain groups. In this work, we explore new notions of approximate leximax solutions with three distinct motivations: better algorithmic efficiency, exploiting significant utility improvements, and robustness to noise. Among other definitional contributions, we give a new notion of an approximate leximax that satisfies a similarly appealing semantic interpretation and relate it to algorithmically feasible approximate leximax notions. When group utilities are linear over cohort candidates, we give an efficient polynomial-time algorithm for finding a leximax distribution over cohort candidates in both the exact and the approximate setting. Furthermore, we show that finding an integer solution to leximax cohort selection with linear utilities is NP-hard.
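
For finitely many candidate cohorts, the leximax comparison itself is one line: compare group-utility vectors after sorting them in increasing order. (The paper works with distributions over cohorts and approximate variants; this brute-force integral version is only meant to fix intuition.)

def leximax_key(utilities):
    # raise the worst-off group first, then the second worst-off, and so on
    return tuple(sorted(utilities))

# hypothetical cohorts and the utility each of three groups derives from them
cohorts = {
    'A': (0.2, 0.9, 0.9),
    'B': (0.3, 0.4, 0.9),
    'C': (0.3, 0.5, 0.6),
}
best = max(cohorts, key=lambda c: leximax_key(cohorts[c]))
print(best)   # 'C': ties with 'B' on the worst-off (0.3), wins on the next (0.5)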

Cite as

Monika Henzinger, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen. Leximax Approximations and Representative Cohort Selection. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 2:1-2:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{henzinger_et_al:LIPIcs.FORC.2022.2,
  author =	{Henzinger, Monika and Peale, Charlotte and Reingold, Omer and Shen, Judy Hanwen},
  title =	{{Leximax Approximations and Representative Cohort Selection}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{2:1--2:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.2},
  URN =		{urn:nbn:de:0030-drops-165258},
  doi =		{10.4230/LIPIcs.FORC.2022.2},
  annote =	{Keywords: fairness, cohort selection, leximin, maxmin}
}
Document
New Near-Linear Time Decodable Codes Closer to the GV Bound

Authors: Guy Blanc and Dean Doron

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We construct a family of binary codes of relative distance 1/2-ε and rate ε² ⋅ 2^(-log^α (1/ε)) for α ≈ 1/2 that are decodable, probabilistically, in near-linear time. This improves upon the rate of the state-of-the-art near-linear time decoding near the GV bound due to Jeronimo, Srivastava, and Tulsiani, who gave a randomized decoding of Ta-Shma codes with α ≈ 5/6 [Ta-Shma, 2017; Jeronimo et al., 2021]. Each code in our family can be constructed in probabilistic polynomial time, or deterministic polynomial time given sufficiently good explicit 3-uniform hypergraphs. Our construction is based on a new graph-based bias amplification method. While previous works start with some base code of relative distance 1/2-ε₀ for ε₀ ≫ ε and amplify the distance to 1/2-ε by walking on an expander, or on a carefully tailored product of expanders, we walk over very sparse, highly mixing, hypergraphs. Study of such hypergraphs further offers an avenue toward achieving rate Ω̃(ε²). For our unique- and list-decoding algorithms, we employ the framework developed in [Jeronimo et al., 2021].
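
The bias-amplification idea can be simulated on a stand-in graph: XOR the labels seen along a short walk and watch the bias of the labeling shrink multiplicatively. The random-permutation graph below is only a stand-in for the structured (hyper)graphs of the paper, and the i.i.d. XOR is included as a reference point.

import random
random.seed(0)

n, d, t = 4096, 16, 4      # vertices, degree, walk length
eps0 = 0.5                 # labeling bias: Pr[label = 1] = (1 - eps0) / 2

# union of d random permutations: a regular graph that mixes well with
# high probability
perms = [random.sample(range(n), n) for _ in range(d)]
label = [1 if random.random() < (1 - eps0) / 2 else 0 for _ in range(n)]

def xor_along_walk():
    v = random.randrange(n)
    acc = label[v]
    for _ in range(t - 1):
        v = perms[random.randrange(d)][v]   # one random step
        acc ^= label[v]
    return acc

def xor_iid():
    return sum(label[random.randrange(n)] for _ in range(t)) % 2

trials = 200_000
bias = lambda f: abs(1 - 2 * sum(f() for _ in range(trials)) / trials)
print('walk xor bias:', bias(xor_along_walk))   # small, decays with t
print('iid  xor bias:', bias(xor_iid))          # reference: about eps0 ** t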

Cite as

Guy Blanc and Dean Doron. New Near-Linear Time Decodable Codes Closer to the GV Bound. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 10:1-10:40, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{blanc_et_al:LIPIcs.CCC.2022.10,
  author =	{Blanc, Guy and Doron, Dean},
  title =	{{New Near-Linear Time Decodable Codes Closer to the GV Bound}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{10:1--10:40},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.10},
  URN =		{urn:nbn:de:0030-drops-165726},
  doi =		{10.4230/LIPIcs.CCC.2022.10},
  annote =	{Keywords: Unique decoding, list decoding, the Gilbert-Varshamov bound, small-bias sample spaces, hypergraphs, expander walks}
}
Document
Pseudorandomness of Expander Random Walks for Symmetric Functions and Permutation Branching Programs

Authors: Louis Golowich and Salil Vadhan

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We study the pseudorandomness of random walks on expander graphs against tests computed by symmetric functions and permutation branching programs. These questions are motivated by applications of expander walks in the coding theory and derandomization literatures. A line of prior work has shown that random walks on expanders with second largest eigenvalue λ fool symmetric functions up to a O(λ) error in total variation distance, but only for the case where the vertices are labeled with symbols from a binary alphabet, and with a suboptimal dependence on the bias of the labeling. We generalize these results to labelings with an arbitrary alphabet, and for the case of binary labelings we achieve an optimal dependence on the labeling bias. We extend our analysis to unify it with and strengthen the expander-walk Chernoff bound. We then show that expander walks fool permutation branching programs up to a O(λ) error in 𝓁₂-distance, and we prove that much stronger bounds hold for programs with a certain structure. We also prove lower bounds to show that our results are tight. To prove our results for symmetric functions, we analyze the Fourier coefficients of the relevant distributions using linear-algebraic techniques. Our analysis for permutation branching programs is likewise linear-algebraic in nature, but also makes use of the recently introduced singular-value approximation notion for matrices (Ahmadinejad et al. 2021).

Cite as

Louis Golowich and Salil Vadhan. Pseudorandomness of Expander Random Walks for Symmetric Functions and Permutation Branching Programs. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 27:1-27:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{golowich_et_al:LIPIcs.CCC.2022.27,
  author =	{Golowich, Louis and Vadhan, Salil},
  title =	{{Pseudorandomness of Expander Random Walks for Symmetric Functions and Permutation Branching Programs}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{27:1--27:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.27},
  URN =		{urn:nbn:de:0030-drops-165893},
  doi =		{10.4230/LIPIcs.CCC.2022.27},
  annote =	{Keywords: Expander graph, Random walk, Pseudorandomness}
}
Document
Omnipredictors

Authors: Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., "will it rain tomorrow?"). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. Alternatively, the same classifier may be used to inform multiple decisions, which correspond to multiple loss functions, requiring multiple learning algorithms to be run on the same data. We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action. We introduce the notion of an (L,𝒞)-omnipredictor, which can be used to optimize any loss in a family L. Once the loss function is set, the outputs of the predictor can be post-processed (a simple univariate data-independent transformation of individual predictions) to do well compared with any hypothesis from the class 𝒞. The post-processing is essentially what one would perform if the outputs of the predictor were true probabilities of the uncertain events. In a sense, omnipredictors extract all the predictive power from the class 𝒞, irrespective of the loss function in L. We show that such "loss-oblivious" learning is feasible through a connection to multicalibration, a notion introduced in the context of algorithmic fairness. A multicalibrated predictor doesn’t aim to minimize some loss function, but rather to make calibrated predictions, even when conditioned on inputs lying in certain sets c belonging to a family 𝒞 which is weakly learnable. We show that a 𝒞-multicalibrated predictor is also an (L,𝒞)-omnipredictor, where L contains all convex loss functions with some mild Lipschitz conditions. The predictors are even omnipredictors with respect to sparse linear combinations of functions in 𝒞. As a corollary, we deduce that distribution-specific weak agnostic learning is complete for a large class of loss minimization tasks. In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results. Finally, we transfer our insights back to the context of algorithmic fairness by providing omnipredictors for multi-group loss minimization.
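
The post-processing step is genuinely simple: treat the predicted probability as if it were the truth and minimize expected loss over actions. A sketch for binary outcomes with a hypothetical action grid (for squared loss the optimal action is the predicted mean; for absolute loss, the median):

def post_process(p, loss, actions):
    # pick the action minimizing expected loss under predicted probability p
    return min(actions, key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))

squared = lambda a, y: (a - y) ** 2
absolute = lambda a, y: abs(a - y)
actions = [i / 100 for i in range(101)]

for p in (0.3, 0.7):
    print(p, post_process(p, squared, actions),    # -> p itself (the mean)
             post_process(p, absolute, actions))   # -> 0 or 1 (the median)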

Cite as

Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder. Omnipredictors. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 79:1-79:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gopalan_et_al:LIPIcs.ITCS.2022.79,
  author =	{Gopalan, Parikshit and Kalai, Adam Tauman and Reingold, Omer and Sharan, Vatsal and Wieder, Udi},
  title =	{{Omnipredictors}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{79:1--79:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.79},
  URN =		{urn:nbn:de:0030-drops-156755},
  doi =		{10.4230/LIPIcs.ITCS.2022.79},
  annote =	{Keywords: Loss-minimization, multi-group fairness, agnostic learning, boosting}
}
Document
RANDOM
Fourier Growth of Structured 𝔽₂-Polynomials and Applications

Authors: Jarosław Błasiok, Peter Ivanov, Yaonan Jin, Chin Ho Lee, Rocco A. Servedio, and Emanuele Viola

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We analyze the Fourier growth, i.e., the L₁ Fourier weight at level k (denoted L_{1,k}), of various well-studied classes of "structured" 𝔽₂-polynomials. This study is motivated by applications in pseudorandomness, in particular recent results and conjectures due to [Chattopadhyay et al., 2019; Chattopadhyay et al., 2019; Eshan Chattopadhyay et al., 2020] which show that upper bounds on Fourier growth (even at level k = 2) give unconditional pseudorandom generators. Our main structural results on Fourier growth are as follows: - We show that any symmetric degree-d 𝔽₂-polynomial p has L_{1,k}(p) ≤ Pr[p = 1] ⋅ O(d)^k. This quadratically strengthens an earlier bound that was implicit in [Omer Reingold et al., 2013]. - We show that any read-Δ degree-d 𝔽₂-polynomial p has L_{1,k}(p) ≤ Pr[p = 1] ⋅ (kΔd)^{O(k)}. - We establish a composition theorem which gives L_{1,k} bounds on disjoint compositions of functions that are closed under restrictions and admit L_{1,k} bounds. Finally, we apply the above structural results to obtain new unconditional pseudorandom generators and new correlation bounds for various classes of 𝔽₂-polynomials.
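
For intuition, L_{1,k} can be computed by brute force for small n; a sketch over {0,1}^n with the standard parity characters (conventions, such as a 0/1 versus ±1 range for f, vary across the literature):

from itertools import combinations, product

def fourier_level_weight(f, n, k):
    # L_{1,k}(f): sum over |S| = k of |f_hat(S)|, where
    # f_hat(S) = E_x[f(x) * (-1)^(sum of x_i over i in S)], x uniform in {0,1}^n
    points = list(product([0, 1], repeat=n))
    total = 0.0
    for S in combinations(range(n), k):
        coeff = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in points) / 2 ** n
        total += abs(coeff)
    return total

# example: a symmetric degree-4 F2-polynomial, the AND of 4 variables
f = lambda x: x[0] & x[1] & x[2] & x[3]
for k in range(5):
    print(k, fourier_level_weight(f, 4, k))   # C(4, k) / 16 at each level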

Cite as

Jarosław Błasiok, Peter Ivanov, Yaonan Jin, Chin Ho Lee, Rocco A. Servedio, and Emanuele Viola. Fourier Growth of Structured 𝔽₂-Polynomials and Applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blasiok_et_al:LIPIcs.APPROX/RANDOM.2021.53,
  author =	{B{\l}asiok, Jaros{\l}aw and Ivanov, Peter and Jin, Yaonan and Lee, Chin Ho and Servedio, Rocco A. and Viola, Emanuele},
  title =	{{Fourier Growth of Structured \mathbb{F}₂-Polynomials and Applications}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.53},
  URN =		{urn:nbn:de:0030-drops-147462},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.53},
  annote =	{Keywords: Fourier analysis, Pseudorandomness, Fourier growth}
}
Document
RANDOM
Pseudorandom Generators for Read-Once Monotone Branching Programs

Authors: Dean Doron, Raghu Meka, Omer Reingold, Avishay Tal, and Salil Vadhan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
Motivated by the derandomization of space-bounded computation, there has been a long line of work on constructing pseudorandom generators (PRGs) against various forms of read-once branching programs (ROBPs), with a goal of improving the O(log² n) seed length of Nisan’s classic construction [Noam Nisan, 1992] to the optimal O(log n). In this work, we construct an explicit PRG with seed length Õ(log n) for constant-width ROBPs that are monotone, meaning that the states at each time step can be ordered so that edges with the same labels never cross each other. Equivalently, for each fixed input, the transition functions are a monotone function of the state. This result is complementary to a line of work that gave PRGs with seed length O(log n) for (ordered) permutation ROBPs of constant width [Braverman et al., 2014; Koucký et al., 2011; De, 2011; Thomas Steinke, 2012], since the monotonicity constraint can be seen as the "opposite" of the permutation constraint. Our PRG also works for monotone ROBPs that can read the input bits in any order, which are strictly more powerful than read-once AC⁰. Our PRG achieves better parameters (in terms of the dependence on the depth of the circuit) than the best previous pseudorandom generator for read-once AC⁰, due to Doron, Hatami, and Hoza [Doron et al., 2019]. Our pseudorandom generator construction follows Ajtai and Wigderson’s approach of iterated pseudorandom restrictions [Ajtai and Wigderson, 1989; Gopalan et al., 2012]. We give a randomness-efficient width-reduction process which proves that the branching program simplifies to an O(log n)-junta after only O(log log n) independent applications of the Forbes-Kelley pseudorandom restrictions [Michael A. Forbes and Zander Kelley, 2018].
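
Monotonicity here is a concrete, checkable property of the transition tables: with states numbered so that the order is respected, each fixed-label transition map must be non-decreasing. A sketch with a hypothetical width-3 saturating counter, which is monotone but not a permutation program:

def eval_robp(transitions, accept, x, start=0):
    # transitions[i][b] maps the state at time i to the state at time i + 1
    # when the i-th input bit is b
    s = start
    for i, bit in enumerate(x):
        s = transitions[i][bit][s]
    return 1 if s in accept else 0

def is_monotone(transitions):
    # monotone ROBP: every fixed-label transition map is non-decreasing
    return all(t[s] <= t[s + 1]
               for step in transitions for t in step
               for s in range(len(t) - 1))

stay = [0, 1, 2]   # on a 0-bit: keep the current count
inc = [1, 2, 2]    # on a 1-bit: advance, saturating at state 2
transitions = [[stay, inc] for _ in range(4)]      # width 3, length 4
print(is_monotone(transitions))                    # True
print(eval_robp(transitions, {2}, [1, 0, 1, 1]))   # 1: at least two ones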

Cite as

Dean Doron, Raghu Meka, Omer Reingold, Avishay Tal, and Salil Vadhan. Pseudorandom Generators for Read-Once Monotone Branching Programs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 58:1-58:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{doron_et_al:LIPIcs.APPROX/RANDOM.2021.58,
  author =	{Doron, Dean and Meka, Raghu and Reingold, Omer and Tal, Avishay and Vadhan, Salil},
  title =	{{Pseudorandom Generators for Read-Once Monotone Branching Programs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{58:1--58:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.58},
  URN =		{urn:nbn:de:0030-drops-147513},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.58},
  annote =	{Keywords: Branching programs, pseudorandom generators, constant depth circuits}
}
Document
Service in Your Neighborhood: Fairness in Center Location

Authors: Christopher Jung, Sampath Kannan, and Neil Lutz

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
When selecting locations for a set of centers, standard clustering algorithms may place unfair burden on some individuals and neighborhoods. We formulate a fairness concept that takes local population densities into account. In particular, given k centers to locate and a population of size n, we define the "neighborhood radius" of an individual i as the minimum radius of a ball centered at i that contains at least n/k individuals. Our objective is to ensure that each individual has a center that is within at most a small constant factor of her neighborhood radius. We present several theoretical results: we show that optimizing this factor is NP-hard; we give an approximation algorithm that guarantees a factor of at most 2 in all metric spaces; and we prove matching lower bounds in some metric spaces. We apply a variant of this algorithm to real-world address data, showing that it differs markedly from standard clustering algorithms, outperforms them on our objective function, and balances the load between centers more evenly.
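
The definitions translate directly into code; a one-dimensional sketch with made-up coordinates (the paper works in general metric spaces):

def neighborhood_radius(points, i, k):
    # minimum radius of a ball around points[i] containing at least
    # ceil(n / k) individuals, counting points[i] itself
    need = -(-len(points) // k)
    return sorted(abs(points[i] - q) for q in points)[need - 1]

def fairness_factor(points, centers, k):
    # the objective: max over individuals of
    # (distance to nearest center) / (neighborhood radius)
    return max(min(abs(p - c) for c in centers) / neighborhood_radius(points, i, k)
               for i, p in enumerate(points))

# a dense town at 0..9 and a sparse rural stretch at 50, 70, 90
points = list(range(10)) + [50, 70, 90]
print(fairness_factor(points, centers=[4, 70], k=2))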

Cite as

Christopher Jung, Sampath Kannan, and Neil Lutz. Service in Your Neighborhood: Fairness in Center Location. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{jung_et_al:LIPIcs.FORC.2020.5,
  author =	{Jung, Christopher and Kannan, Sampath and Lutz, Neil},
  title =	{{Service in Your Neighborhood: Fairness in Center Location}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{5:1--5:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.5},
  URN =		{urn:nbn:de:0030-drops-120215},
  doi =		{10.4230/LIPIcs.FORC.2020.5},
  annote =	{Keywords: Fairness, Clustering, Facility Location}
}
Document
Bounded-Leakage Differential Privacy

Authors: Katrina Ligett, Charlotte Peale, and Omer Reingold

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
We introduce and study a relaxation of differential privacy [Dwork et al., 2006] that accounts for mechanisms that leak some additional, bounded information about the database. We apply this notion to reason about two distinct settings where the notion of differential privacy is of limited use. First, we consider cases, such as in the 2020 US Census [Abowd, 2018], in which some information about the database is released exactly or with small noise. Second, we consider the accumulation of privacy harms for an individual across studies that may not even include the data of this individual. The tools that we develop for bounded-leakage differential privacy allow us to reason about privacy loss in these settings and to show that individuals retain some meaningful protections.
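
A toy of the setting, not the paper's mechanism: a standard eps-differentially-private noisy count released alongside an exact, bounded piece of leakage (here the database size, loosely in the spirit of exactly released census statistics):

import random

def laplace(scale):
    # difference of two i.i.d. exponentials with mean `scale` is Laplace(0, scale)
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def release(db, eps):
    # the differentially private part: a noisy count of ones
    noisy_count = sum(db) + laplace(1.0 / eps)
    # the bounded leakage: an exact function of the database that is
    # revealed alongside the private release
    leakage = len(db)
    return noisy_count, leakage

db = [random.randint(0, 1) for _ in range(1000)]
print(release(db, eps=0.5))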

Cite as

Katrina Ligett, Charlotte Peale, and Omer Reingold. Bounded-Leakage Differential Privacy. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ligett_et_al:LIPIcs.FORC.2020.10,
  author =	{Ligett, Katrina and Peale, Charlotte and Reingold, Omer},
  title =	{{Bounded-Leakage Differential Privacy}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.10},
  URN =		{urn:nbn:de:0030-drops-120265},
  doi =		{10.4230/LIPIcs.FORC.2020.10},
  annote =	{Keywords: differential privacy, applications, privacy, leakage, auxiliary information}
}