22 Search Results for "Dwork, Cynthia"


From the Real Towards the Ideal: Risk Prediction in a Better World

Authors: Cynthia Dwork, Omer Reingold, and Guy N. Rothblum

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Prediction algorithms assign scores in [0,1] to individuals, often interpreted as "probabilities" of a positive outcome, for example, of repaying a loan or succeeding in a job. Success, however, rarely depends only on the individual: it is a function of the individual’s interaction with the environment, past and present. Environments do not treat all demographic groups equally. We initiate the study of corrective transformations τ that map predictors of success in the real world to predictors in a better world. In the language of algorithmic fairness, letting p^* denote the true probabilities of success in the real, unfair, world, we characterize the transformations τ for which it is feasible to find a predictor q̃ that is indistinguishable from τ(p^*). The problem is challenging because we do not have access to probabilities or even outcomes in a better world. Nor do we have access to probabilities p^* in the real world. The only data available for training are outcomes from the real world. We obtain a complete characterization of when it is possible to learn predictors that are indistinguishable from τ(p^*), in the form of a simple-to-state criterion describing necessary and sufficient conditions for doing so. This criterion is inextricably bound with the very existence of uncertainty.
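As a toy numeric illustration of a corrective transformation τ acting on real-world scores (this particular τ is hypothetical and chosen only for concreteness; the paper's contribution is characterizing which τ admit a learnable, indistinguishable q̃):

```python
# A hypothetical monotone corrective transformation tau: it maps
# real-world success probabilities p* toward a "better world" by
# compressing low scores upward. Illustrative only -- not from the paper.
tau = lambda p: p ** 0.5

p_star = [0.04, 0.25, 0.81]          # (unknowable) real-world probabilities
q_target = [tau(p) for p in p_star]  # what an indistinguishable q~ must mimic
```

The difficulty the abstract describes is that neither p^* nor better-world outcomes are observable; only real-world outcomes are available for training.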

Cite as

Cynthia Dwork, Omer Reingold, and Guy N. Rothblum. From the Real Towards the Ideal: Risk Prediction in a Better World. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 1:1-1:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{dwork_et_al:LIPIcs.FORC.2023.1,
  author =	{Dwork, Cynthia and Reingold, Omer and Rothblum, Guy N.},
  title =	{{From the Real Towards the Ideal: Risk Prediction in a Better World}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{1:1--1:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.1},
  URN =		{urn:nbn:de:0030-drops-179224},
  doi =		{10.4230/LIPIcs.FORC.2023.1},
  annote =	{Keywords: Algorithmic Fairness, Affirmative Action, Learning, Predictions, Multicalibration, Outcome Indistinguishability}
}
HappyMap : A Generalized Multicalibration Method

Authors: Zhun Deng, Cynthia Dwork, and Linjun Zhang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Multicalibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class C, multi-calibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in C. Fairness is captured by incorporating demographic subgroups into the class of functions C. Recent work has shown that, by enriching the class C to incorporate appropriate propensity re-weighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future, target domains (approximately) captured by the re-weightings. Formally, multicalibration with respect to C bounds |𝔼_{(x,y)∼D}[c(f(x),x)⋅(f(x)-y)]| for all c ∈ C. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose s-Happy Multicalibration, a generalization of multi-calibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification, a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing seemingly disparate algorithmic fairness notions and target-independent learning approaches. We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success.
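The audited quantity in the abstract, |𝔼[c(f(x),x)⋅(f(x)-y)]|, can be estimated empirically for a single auditor c. A minimal sketch on synthetic data (the data distribution, predictor f, and auditor c below are hypothetical, not from the paper):

```python
import numpy as np

# Empirical multicalibration audit term |E[c(f(x), x) * (f(x) - y)]|
# for one auditor c, on synthetic data where E[y | x] = x.
rng = np.random.default_rng(0)
x = rng.uniform(size=5000)
y = (rng.uniform(size=5000) < x).astype(float)  # Bernoulli(x) outcomes

f = lambda v: v                 # a calibrated predictor of y
c = lambda fx, v: (v > 0.5)     # auditor: indicator of a subgroup

audit = abs(np.mean(c(f(x), x) * (f(x) - y)))   # near 0 for calibrated f
```

A multicalibrated predictor keeps this quantity small simultaneously for every auditor in C; HappyMap replaces the residual (f(x)-y) with richer mappings.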

Cite as

Zhun Deng, Cynthia Dwork, and Linjun Zhang. HappyMap : A Generalized Multicalibration Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 41:1-41:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{deng_et_al:LIPIcs.ITCS.2023.41,
  author =	{Deng, Zhun and Dwork, Cynthia and Zhang, Linjun},
  title =	{{HappyMap : A Generalized Multicalibration Method}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{41:1--41:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.41},
  URN =		{urn:nbn:de:0030-drops-175449},
  doi =		{10.4230/LIPIcs.ITCS.2023.41},
  annote =	{Keywords: algorithmic fairness, target-independent learning, transfer learning}
}
Making Decisions Under Outcome Performativity

Authors: Michael P. Kim and Juan C. Perdomo

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Decision-makers often act in response to data-driven predictions, with the goal of achieving favorable outcomes. In such settings, predictions don’t passively forecast the future; instead, predictions actively shape the distribution of outcomes they are meant to predict. This performative prediction setting [Brown et al., 2022] raises new challenges for learning "optimal" decision rules. In particular, existing solution concepts do not address the apparent tension between the goals of forecasting outcomes accurately and steering individuals to achieve desirable outcomes. To contend with this concern, we introduce a new optimality concept - performative omniprediction - adapted from the supervised (non-performative) learning setting [Gopalan et al., 2022]. A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possibly-competing objectives. Our main result demonstrates that efficient performative omnipredictors exist, under a natural restriction of performative prediction, which we call outcome performativity. On a technical level, our results follow by carefully generalizing the notion of outcome indistinguishability [Cynthia Dwork et al., 2021] to the outcome performative setting. From an appropriate notion of Performative OI, we recover many consequences known to hold in the supervised setting, such as omniprediction and universal adaptability [Kim et al., 2022].

Cite as

Michael P. Kim and Juan C. Perdomo. Making Decisions Under Outcome Performativity. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 79:1-79:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{kim_et_al:LIPIcs.ITCS.2023.79,
  author =	{Kim, Michael P. and Perdomo, Juan C.},
  title =	{{Making Decisions Under Outcome Performativity}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{79:1--79:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.79},
  URN =		{urn:nbn:de:0030-drops-175824},
  doi =		{10.4230/LIPIcs.ITCS.2023.79},
  annote =	{Keywords: performative prediction, outcome indistinguishability}
}
Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure

Authors: Gal Amram, Avi Hayoun, Lior Mizrahi, and Gera Weiss

Published in: LIPIcs, Volume 246, 36th International Symposium on Distributed Computing (DISC 2022)


Abstract
We analyze correctness of implementations of the snapshot data structure in terms of linearizability. We show that such implementations can be verified in polynomial time. Additionally, we identify a set of representative executions for testing and show that the correctness of each of these executions can be validated in linear time. These results present a significant speedup considering that verifying linearizability of implementations of concurrent data structures, in general, is EXPSPACE-complete in the number of program-states, and testing linearizability is NP-complete in the length of the tested execution. The crux of our approach is identifying a class of executions, which we call simple, such that a snapshot implementation is linearizable if and only if all of its simple executions are linearizable. We then divide all possible non-linearizable simple executions into three categories and construct a small automaton that recognizes each category. We describe two implementations (one for verification and one for testing) of an automata-based approach that we develop based on this result and an evaluation that demonstrates significant improvements over existing tools. For verification, we show that restricting a state-of-the-art tool to analyzing only simple executions saves resources and allows the analysis of more complex cases. Specifically, restricting attention to simple executions finds bugs in 27 instances, whereas, without this restriction, we were only able to find 14 of the 30 bugs in the instances we examined. We also show that our technique accelerates testing performance significantly. Specifically, our implementation solves the complete set of 900 problems we generated, whereas the state-of-the-art linearizability testing tool solves only 554 problems.

Cite as

Gal Amram, Avi Hayoun, Lior Mizrahi, and Gera Weiss. Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure. In 36th International Symposium on Distributed Computing (DISC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 246, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{amram_et_al:LIPIcs.DISC.2022.5,
  author =	{Amram, Gal and Hayoun, Avi and Mizrahi, Lior and Weiss, Gera},
  title =	{{Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure}},
  booktitle =	{36th International Symposium on Distributed Computing (DISC 2022)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-255-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{246},
  editor =	{Scheideler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2022.5},
  URN =		{urn:nbn:de:0030-drops-171964},
  doi =		{10.4230/LIPIcs.DISC.2022.5},
  annote =	{Keywords: Snapshot, Linearizability, Verification, Formal Methods}
}
Improved Generalization Guarantees in Restricted Data Models

Authors: Elbert Du and Cynthia Dwork

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
Differential privacy is known to protect against threats to validity incurred due to adaptive, or exploratory, data analysis - even when the analyst adversarially searches for a statistical estimate that diverges from the true value of the quantity of interest on the underlying population. The cost of this protection is the accuracy loss incurred by differential privacy. In this work, inspired by standard models in the genomics literature, we consider data models in which individuals are represented by a sequence of attributes with the property that distant attributes are only weakly correlated. We show that, under this assumption, it is possible to "re-use" privacy budget on different portions of the data, significantly improving accuracy without increasing the risk of overfitting.

Cite as

Elbert Du and Cynthia Dwork. Improved Generalization Guarantees in Restricted Data Models. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{du_et_al:LIPIcs.FORC.2022.6,
  author =	{Du, Elbert and Dwork, Cynthia},
  title =	{{Improved Generalization Guarantees in Restricted Data Models}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{6:1--6:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.6},
  URN =		{urn:nbn:de:0030-drops-165299},
  doi =		{10.4230/LIPIcs.FORC.2022.6},
  annote =	{Keywords: Differential Privacy, Adaptive Data Analysis, Transfer Theorem}
}
Metric Learning for Individual Fairness

Authors: Christina Ilvento

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
There has been much discussion concerning how "fairness" should be measured or enforced in classification. Individual Fairness [Dwork et al., 2012], which requires that similar individuals be treated similarly, is a highly appealing definition as it gives strong treatment guarantees for individuals. Unfortunately, the need for a task-specific similarity metric has prevented its use in practice. In this work, we propose a solution to the problem of approximating a metric for Individual Fairness based on human judgments. Our model assumes access to a human fairness arbiter who is free of explicit biases and possesses sufficient domain knowledge to evaluate similarity. Our contributions include definitions for metric approximation relevant for Individual Fairness, constructions for approximations from a limited number of realistic queries to the arbiter on a sample of individuals, and learning procedures to construct hypotheses for metric approximations which generalize to unseen samples under certain assumptions of learnability of distance threshold functions.
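For deterministic scores, the Individual Fairness requirement cited above reduces to a Lipschitz condition: |f(x) - f(y)| ≤ d(x, y) for the task-specific metric d. A toy audit over a sample (the scores and metric below are hypothetical placeholders, not constructions from the paper):

```python
import itertools

# Audit a set of deterministic scores against a similarity metric d:
# report every pair violating the Lipschitz condition |f(x)-f(y)| <= d(x,y).
def violations(scores, d):
    return [(x, y) for x, y in itertools.combinations(scores, 2)
            if abs(scores[x] - scores[y]) > d(x, y)]

scores = {"a": 0.9, "b": 0.85, "c": 0.2}
# Hypothetical metric: a and b are very similar, c is far from both.
d = lambda x, y: 0.1 if {x, y} <= {"a", "b"} else 1.0
bad = violations(scores, d)   # empty: these scores satisfy the condition
```

The paper's problem is precisely that a trustworthy d is rarely available, motivating its approximation from a human arbiter's judgments.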

Cite as

Christina Ilvento. Metric Learning for Individual Fairness. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 2:1-2:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{ilvento:LIPIcs.FORC.2020.2,
  author =	{Ilvento, Christina},
  title =	{{Metric Learning for Individual Fairness}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{2:1--2:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.2},
  URN =		{urn:nbn:de:0030-drops-120183},
  doi =		{10.4230/LIPIcs.FORC.2020.2},
  annote =	{Keywords: metric learning, individual fairness, fair machine learning}
}
Individual Fairness in Pipelines

Authors: Cynthia Dwork, Christina Ilvento, and Meena Jagadeesan

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
It is well understood that a system built from individually fair components may not itself be individually fair. In this work, we investigate individual fairness under pipeline composition. Pipelines differ from ordinary sequential or repeated composition in that individuals may drop out at any stage, and classification in subsequent stages may depend on the remaining "cohort" of individuals. As an example, a company might hire a team for a new project and at a later point promote the highest performer on the team. Unlike other repeated classification settings, where the degree of unfairness degrades gracefully over multiple fair steps, the degree of unfairness in pipelines can be arbitrary, even in a pipeline with just two stages. Guided by a panoply of real-world examples, we provide a rigorous framework for evaluating different types of fairness guarantees for pipelines. We show that naïve auditing is unable to uncover systematic unfairness and that, in order to ensure fairness, some form of dependence must exist between the design of algorithms at different stages in the pipeline. Finally, we provide constructions that permit flexibility at later stages, meaning that there is no need to lock in the entire pipeline at the time that the early stage is constructed.

Cite as

Cynthia Dwork, Christina Ilvento, and Meena Jagadeesan. Individual Fairness in Pipelines. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{dwork_et_al:LIPIcs.FORC.2020.7,
  author =	{Dwork, Cynthia and Ilvento, Christina and Jagadeesan, Meena},
  title =	{{Individual Fairness in Pipelines}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.7},
  URN =		{urn:nbn:de:0030-drops-120235},
  doi =		{10.4230/LIPIcs.FORC.2020.7},
  annote =	{Keywords: algorithmic fairness, fairness under composition, pipelines}
}
Abstracting Fairness: Oracles, Metrics, and Interpretability

Authors: Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, and Pragya Sur

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
It is well understood that classification algorithms, for example, for deciding on loan applications, cannot be evaluated for fairness without taking context into account. We examine what can be learned from a fairness oracle equipped with an underlying understanding of "true" fairness. The oracle takes as input a (context, classifier) pair satisfying an arbitrary fairness definition, and accepts or rejects the pair according to whether the classifier satisfies the underlying fairness truth. Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle. Since every "truly fair" classifier induces a coarse metric, in which those receiving the same decision are at distance zero from one another and those receiving different decisions are at distance one, this extraction process provides the basis for ensuring a rough form of metric fairness, also known as individual fairness. Our principal technical result is a higher fidelity extractor under a mild technical constraint on the weak oracle’s conception of fairness. Our framework permits the scenario in which many classifiers, with differing outcomes, may all be considered fair. Our results have implications for interpretability - a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be "unfair" or illegitimately derived.

Cite as

Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, and Pragya Sur. Abstracting Fairness: Oracles, Metrics, and Interpretability. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{dwork_et_al:LIPIcs.FORC.2020.8,
  author =	{Dwork, Cynthia and Ilvento, Christina and Rothblum, Guy N. and Sur, Pragya},
  title =	{{Abstracting Fairness: Oracles, Metrics, and Interpretability}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.8},
  URN =		{urn:nbn:de:0030-drops-120247},
  doi =		{10.4230/LIPIcs.FORC.2020.8},
  annote =	{Keywords: Algorithmic fairness, fairness definitions, causality-based fairness, interpretability, individual fairness, metric fairness}
}
Asymmetric Distances for Approximate Differential Privacy

Authors: Dmitry Chistikov, Andrzej S. Murawski, and David Purser

Published in: LIPIcs, Volume 140, 30th International Conference on Concurrency Theory (CONCUR 2019)


Abstract
Differential privacy is a widely studied notion of privacy for various models of computation, based on measuring differences between probability distributions. We consider (epsilon,delta)-differential privacy in the setting of labelled Markov chains. For a given epsilon, the parameter delta can be captured by a variant of the total variation distance, which we call lv_{alpha} (where alpha = e^{epsilon}). First we study lv_{alpha} directly, showing that it cannot be computed exactly. However, the associated approximation problem turns out to be in PSPACE and #P-hard. Next we introduce a new bisimilarity distance for bounding lv_{alpha} from above, which provides a tighter bound than previously known distances while remaining computable with the same complexity (polynomial time with an NP oracle). We also propose an alternative bound that can be computed in polynomial time. Finally, we illustrate the distances on case studies.
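On finite distributions, the lv_{alpha}-style quantity the abstract describes has a closed form: the tight delta between output distributions p and q at privacy level epsilon is Σ_x max(0, p(x) - e^ε·q(x)). A minimal sketch of this finite case (the paper's setting of labelled Markov chains is much harder, and this toy is only illustrative):

```python
import math

# Tight additive slack delta for (epsilon, delta)-DP between two finite
# output distributions: sum over outcomes of max(0, p(x) - alpha*q(x)),
# with alpha = e^epsilon. Note the quantity is asymmetric in (p, q).
def lv(p, q, epsilon):
    alpha = math.exp(epsilon)
    return sum(max(0.0, pi - alpha * qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
delta = lv(p, q, epsilon=0.1)
```

Swapping the arguments generally changes the value, which is the asymmetry the paper's title refers to.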

Cite as

Dmitry Chistikov, Andrzej S. Murawski, and David Purser. Asymmetric Distances for Approximate Differential Privacy. In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chistikov_et_al:LIPIcs.CONCUR.2019.10,
  author =	{Chistikov, Dmitry and Murawski, Andrzej S. and Purser, David},
  title =	{{Asymmetric Distances for Approximate Differential Privacy}},
  booktitle =	{30th International Conference on Concurrency Theory (CONCUR 2019)},
  pages =	{10:1--10:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-121-4},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{140},
  editor =	{Fokkink, Wan and van Glabbeek, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2019.10},
  URN =		{urn:nbn:de:0030-drops-109121},
  doi =		{10.4230/LIPIcs.CONCUR.2019.10},
  annote =	{Keywords: Bisimilarity distances, Differential privacy, Labelled Markov chains}
}
Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling

Authors: Susanna F. de Rezende, Jakob Nordström, Or Meir, and Robert Robere

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t+1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system.

Cite as

Susanna F. de Rezende, Jakob Nordström, Or Meir, and Robert Robere. Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{derezende_et_al:LIPIcs.CCC.2019.18,
  author =	{de Rezende, Susanna F. and Nordstr\"{o}m, Jakob and Meir, Or and Robere, Robert},
  title =	{{Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.18},
  URN =		{urn:nbn:de:0030-drops-108403},
  doi =		{10.4230/LIPIcs.CCC.2019.18},
  annote =	{Keywords: proof complexity, Nullstellensatz, pebble games, trade-offs, size, degree}
}
Fairness Under Composition

Authors: Cynthia Dwork and Christina Ilvento

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition.

Cite as

Cynthia Dwork and Christina Ilvento. Fairness Under Composition. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{dwork_et_al:LIPIcs.ITCS.2019.33,
  author =	{Dwork, Cynthia and Ilvento, Christina},
  title =	{{Fairness Under Composition}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.33},
  URN =		{urn:nbn:de:0030-drops-101269},
  doi =		{10.4230/LIPIcs.ITCS.2019.33},
  annote =	{Keywords: algorithmic fairness, fairness, fairness under composition}
}
Making Existential-unforgeable Signatures Strongly Unforgeable in the Quantum Random-oracle Model

Authors: Edward Eaton and Fang Song

Published in: LIPIcs, Volume 44, 10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015)


Abstract
Strongly unforgeable signature schemes provide a more stringent security guarantee than standard existential unforgeability: not only must forging a signature on a new message be hard, it must also be infeasible to produce a new signature on a message for which the adversary has already seen valid signatures. Strongly unforgeable signatures are useful both in practice and as a building block in many cryptographic constructions. This work investigates a generic transformation that compiles any existential-unforgeable scheme into a strongly unforgeable one, which was proposed by Teranishi et al. [Teranishi/Oyama/Ogata, Cryptology-Indocrypt 2006] and was proven in the classical random-oracle model. Our main contribution is showing that the transformation also works against quantum adversaries in the quantum random-oracle model. We develop proof techniques such as adaptively programming a quantum random-oracle in a new setting, which could be of independent interest. Applying the transformation to an existential-unforgeable signature scheme due to Cash et al. [Cash/Hofheinz/Kiltz/Peikert, J. of Cryptology 2012], which can be shown to be quantum-secure assuming certain lattice problems are hard for quantum computers, we get an efficient quantum-secure strongly unforgeable signature scheme in the quantum random-oracle model.

Cite as

Edward Eaton and Fang Song. Making Existential-unforgeable Signatures Strongly Unforgeable in the Quantum Random-oracle Model. In 10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 44, pp. 147-162, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{eaton_et_al:LIPIcs.TQC.2015.147,
  author =	{Eaton, Edward and Song, Fang},
  title =	{{Making Existential-unforgeable Signatures Strongly Unforgeable in the Quantum Random-oracle Model}},
  booktitle =	{10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015)},
  pages =	{147--162},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-96-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{44},
  editor =	{Beigi, Salman and K\"{o}nig, Robert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2015.147},
  URN =		{urn:nbn:de:0030-drops-55540},
  doi =		{10.4230/LIPIcs.TQC.2015.147},
  annote =	{Keywords: digital signatures, strongly unforgeable, quantum random-oracle, lattices}
}
Document
Spectral Norm of Random Kernel Matrices with Applications to Privacy

Authors: Shiva Prasad Kasiviswanathan and Mark Rudelson

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Kernel methods are an extremely popular set of techniques used for many important machine learning and data analysis applications. In addition to having good practical performance, these methods are supported by a well-developed theory. Kernel methods use an implicit mapping of the input data into a high-dimensional feature space defined by a kernel function, i.e., a function returning the inner product between the images of two data points in the feature space. Central to any kernel method is the kernel matrix, which is built by evaluating the kernel function on a given sample dataset. In this paper, we initiate the study of non-asymptotic spectral properties of random kernel matrices. These are n x n random matrices whose (i,j)th entry is obtained by evaluating the kernel function on x_i and x_j, where x_1, ..., x_n are n independent random high-dimensional vectors. Our main contribution is to obtain tight upper bounds on the spectral norm (largest eigenvalue) of random kernel matrices constructed by using common kernel functions such as polynomials and Gaussian radial basis. As an application of these results, we provide lower bounds on the distortion needed for releasing the coefficients of kernel ridge regression under attribute privacy, a general privacy notion which captures a large class of privacy definitions. Kernel ridge regression is a standard method for performing non-parametric regression that regularly outperforms traditional regression approaches in various domains. Our privacy distortion lower bounds are the first for any kernel technique, and our analysis assumes realistic scenarios for the input, unlike all previous lower bounds for other release problems, which only hold under very restrictive input settings.
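The object of study can be reproduced numerically in a few lines (an illustrative sketch, not the paper's analysis; the distribution and bandwidth below are our choices): build the Gaussian radial-basis kernel matrix on n independent random vectors and compute its spectral norm.

```python
import numpy as np

def rbf_kernel_matrix(X: np.ndarray, gamma: float) -> np.ndarray:
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2): the random kernel matrix
    # obtained by evaluating the Gaussian RBF kernel on every pair of rows.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(0)
n, d = 200, 500
X = rng.normal(size=(n, d)) / np.sqrt(d)  # independent subgaussian rows

K = rbf_kernel_matrix(X, gamma=1.0)
spectral_norm = np.linalg.norm(K, 2)      # largest eigenvalue (K is PSD)
print(f"||K||_2 = {spectral_norm:.2f} for n = {n}")
```

Since K is positive semidefinite with unit diagonal, the spectral norm always lies between 1 and n; the paper's contribution is pinning down where in that range it concentrates for common kernels.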

Cite as

Shiva Prasad Kasiviswanathan and Mark Rudelson. Spectral Norm of Random Kernel Matrices with Applications to Privacy. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 898-914, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kasiviswanathan_et_al:LIPIcs.APPROX-RANDOM.2015.898,
  author =	{Kasiviswanathan, Shiva Prasad and Rudelson, Mark},
  title =	{{Spectral Norm of Random Kernel Matrices with Applications to Privacy}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{898--914},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.898},
  URN =		{urn:nbn:de:0030-drops-53435},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.898},
  annote =	{Keywords: Random Kernel Matrices, Spectral Norm, Subgaussian Distribution, Data Privacy, Reconstruction Attacks}
}
Document
Combinatorial Discrepancy for Boxes via the gamma_2 Norm

Authors: Jirí Matoušek and Aleksandar Nikolov

Published in: LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)


Abstract
The gamma_2 norm of a real m by n matrix A is the minimum number t such that the column vectors of A are contained in a 0-centered ellipsoid E that in turn is contained in the hypercube [-t, t]^m. This classical quantity is polynomial-time computable and was proved by the second author and Talwar to approximate the hereditary discrepancy: it bounds the hereditary discrepancy from above and from below, up to logarithmic factors. Here we provide a simplified proof of the upper bound and show that both the upper and the lower bound are asymptotically tight in the worst case. We then demonstrate on several examples the power of the gamma_2 norm as a tool for proving lower and upper bounds in discrepancy theory. Most notably, we prove a new lower bound of log(n)^(d-1) (up to constant factors) for the d-dimensional Tusnady problem, asking for the combinatorial discrepancy of an n-point set in d-dimensional space with respect to axis-parallel boxes. For d>2, this improves the previous best lower bound, which was of order approximately log(n)^((d-1)/2), and it comes close to the best known upper bound of O(log(n)^(d+1/2)), for which we also obtain a new, very simple proof. Applications to lower bounds for dynamic range searching and lower bounds in differential privacy are given.
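For intuition about the quantity being bounded, the combinatorial discrepancy of a tiny 2-D point set with respect to axis-parallel boxes can be computed exhaustively (our own brute-force sketch; the point set is arbitrary and this scales only to very small n):

```python
from itertools import product

def box_discrepancy(points):
    """Exact combinatorial discrepancy of a 2-D point set with respect to
    axis-parallel boxes: min over +/-1 colorings of the max imbalance of
    any canonical box. Exponential in len(points); for illustration only."""
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    # Canonical boxes: corners snap to coordinates occurring in the set.
    boxes = [(xl, xh, yl, yh)
             for i, xl in enumerate(xs) for xh in xs[i:]
             for j, yl in enumerate(ys) for yh in ys[j:]]
    best = len(points)
    for colors in product((-1, 1), repeat=len(points)):
        worst = max(
            abs(sum(c for (x, y), c in zip(points, colors)
                    if xl <= x <= xh and yl <= y <= yh))
            for (xl, xh, yl, yh) in boxes)
        best = min(best, worst)
    return best

pts = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0), (5, 2)]
disc = box_discrepancy(pts)
print("discrepancy:", disc)
```

The Tusnady problem asks how this quantity grows with n in d dimensions; the paper's log(n)^(d-1) lower bound for boxes is obtained via the gamma_2 norm rather than by any such enumeration.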

Cite as

Jirí Matoušek and Aleksandar Nikolov. Combinatorial Discrepancy for Boxes via the gamma_2 Norm. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{matousek_et_al:LIPIcs.SOCG.2015.1,
  author =	{Matou\v{s}ek, Jir{\'\i} and Nikolov, Aleksandar},
  title =	{{Combinatorial Discrepancy for Boxes via the gamma\underline2 Norm}},
  booktitle =	{31st International Symposium on Computational Geometry (SoCG 2015)},
  pages =	{1--15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-83-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{34},
  editor =	{Arge, Lars and Pach, J\'{a}nos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.1},
  URN =		{urn:nbn:de:0030-drops-51091},
  doi =		{10.4230/LIPIcs.SOCG.2015.1},
  annote =	{Keywords: discrepancy theory, range counting, factorization norms}
}
Document
Upper Bounds on Quantum Query Complexity Inspired by the Elitzur-Vaidman Bomb Tester

Authors: Cedric Yen-Yu Lin and Han-Hsuan Lin

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
Inspired by the Elitzur-Vaidman bomb testing problem [Elitzur/Vaidman 1993], we introduce a new query complexity model, which we call bomb query complexity B(f). We investigate its relationship with the usual quantum query complexity Q(f), and show that B(f)=Theta(Q(f)^2). This result gives a new method to upper bound the quantum query complexity: we give a method of finding bomb query algorithms from classical algorithms, which then provide nonconstructive upper bounds on Q(f)=Theta(sqrt(B(f))). We subsequently give explicit quantum algorithms matching our upper bound method. Applying this method to the single-source shortest paths problem on unweighted graphs, we obtain an algorithm with O(n^(1.5)) quantum query complexity, improving on the best known algorithm of O(n^(1.5) * sqrt(log(n))) [Furrow, 2008]. Applying it to the maximum bipartite matching problem gives an O(n^(1.75)) algorithm, improving on the trivial O(n^2) upper bound, previously the best known.
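The headline relation and how it yields the shortest-paths bound can be written out as a short worked consequence (the B(SSSP) figure below is simply what the stated Q bound implies under the relation, not an additional claim):

```latex
% Main structural result relating bomb and quantum query complexity:
B(f) \;=\; \Theta\!\bigl(Q(f)^{2}\bigr)
\quad\Longleftrightarrow\quad
Q(f) \;=\; \Theta\!\bigl(\sqrt{B(f)}\bigr).
% Consequence for single-source shortest paths on unweighted graphs:
% a bomb query algorithm making B(\mathrm{SSSP}) = O(n^{3}) queries gives
Q(\mathrm{SSSP}) \;=\; O\!\bigl(\sqrt{n^{3}}\bigr) \;=\; O\!\bigl(n^{1.5}\bigr).
```

This is why classical algorithms are useful here: any classical algorithm can be converted into a bomb query algorithm, and taking the square root of its query count gives a quantum upper bound.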

Cite as

Cedric Yen-Yu Lin and Han-Hsuan Lin. Upper Bounds on Quantum Query Complexity Inspired by the Elitzur-Vaidman Bomb Tester. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 537-566, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.CCC.2015.537,
  author =	{Lin, Cedric Yen-Yu and Lin, Han-Hsuan},
  title =	{{Upper Bounds on Quantum Query Complexity Inspired by the Elitzur-Vaidman Bomb Tester}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{537--566},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.537},
  URN =		{urn:nbn:de:0030-drops-50635},
  doi =		{10.4230/LIPIcs.CCC.2015.537},
  annote =	{Keywords: Quantum Algorithms, Query Complexity, Elitzur-Vaidman Bomb Tester, Adversary Method, Maximum Bipartite Matching}
}
  • Refine by Author
  • 7 Dwork, Cynthia
  • 4 Ilvento, Christina
  • 2 Rothblum, Guy N.
  • 1 Amram, Gal
  • 1 Basavaraju, Manu
  • Show More...

  • Refine by Classification
  • 4 Theory of computation → Machine learning theory
  • 4 Theory of computation → Theory and algorithms for application domains
  • 3 Theory of computation → Design and analysis of algorithms
  • 1 Software and its engineering → Formal software verification
  • 1 Theory of computation → Computational complexity and cryptography
  • Show More...

  • Refine by Keyword
  • 3 algorithmic fairness
  • 2 Verification
  • 2 fairness under composition
  • 2 individual fairness
  • 1 Adaptive Data Analysis
  • Show More...

  • Refine by Type
  • 22 document

  • Refine by Publication Year
  • 7 2015
  • 3 2014
  • 3 2019
  • 3 2020
  • 3 2023
  • Show More...
