LIPIcs, Volume 329

6th Symposium on Foundations of Responsible Computing (FORC 2025)




Event

FORC 2025, June 4-6, 2025, Stanford University, CA, USA

Editor

Mark Bun
  • Boston University, MA, USA

Publication Details

  • Published: 2025-06-03
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-367-6
  • DBLP: db/conf/forc/forc2025

Document
Complete Volume
LIPIcs, Volume 329, FORC 2025, Complete Volume

Authors: Mark Bun


Abstract
LIPIcs, Volume 329, FORC 2025, Complete Volume

Cite as

6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 1-466, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Proceedings{bun:LIPIcs.FORC.2025,
  title =	{{LIPIcs, Volume 329, FORC 2025, Complete Volume}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{1--466},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025},
  URN =		{urn:nbn:de:0030-drops-232678},
  doi =		{10.4230/LIPIcs.FORC.2025},
  annote =	{Keywords: LIPIcs, Volume 329, FORC 2025, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Mark Bun


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 0:i-0:x, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bun:LIPIcs.FORC.2025.0,
  author =	{Bun, Mark},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{0:i--0:x},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.0},
  URN =		{urn:nbn:de:0030-drops-232664},
  doi =		{10.4230/LIPIcs.FORC.2025.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Privacy-Computation Trade-Offs in Private Repetition and Metaselection

Authors: Kunal Talwar


Abstract
A Private Repetition algorithm takes as input a differentially private algorithm with constant success probability and boosts it to one that succeeds with high probability. These algorithms are closely related to private metaselection algorithms that compete with the best of many private algorithms, and private hyperparameter tuning algorithms that compete with the best hyperparameter settings for a private learning algorithm. Existing algorithms for these tasks pay either a large overhead in privacy cost, or a large overhead in computational cost. In this work, we show strong lower bounds for problems of this kind, showing in particular that for any algorithm that preserves the privacy cost up to a constant factor, the failure probability can only fall polynomially in the computational overhead. This is in stark contrast with the non-private setting, where the failure probability falls exponentially in the computational overhead. By carefully combining existing algorithms for metaselection, we prove computation-privacy tradeoffs that nearly match our lower bounds.
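
To make the contrast concrete, here is a schematic comparison of the two regimes described in the abstract (notation ours: k is the computational overhead, p the constant success probability, β the failure probability):

$$\underbrace{\beta_{\text{non-private}}(k) = (1-p)^k \le e^{-pk}}_{\text{exponential decay}} \qquad \text{vs.} \qquad \underbrace{\beta_{\text{private}}(k) \ge k^{-c}}_{\text{at best polynomial decay, for some constant } c,\text{ under } O(1)\text{ privacy overhead}}$$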

Cite as

Kunal Talwar. Privacy-Computation Trade-Offs in Private Repetition and Metaselection. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{talwar:LIPIcs.FORC.2025.1,
  author =	{Talwar, Kunal},
  title =	{{Privacy-Computation Trade-Offs in Private Repetition and Metaselection}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.1},
  URN =		{urn:nbn:de:0030-drops-231282},
  doi =		{10.4230/LIPIcs.FORC.2025.1},
  annote =	{Keywords: Differential Privacy, Hyperparameter Tuning, Metaselection}
}
Document
Smooth Sensitivity Revisited: Towards Optimality

Authors: Richard Hladík and Jakub Tětek


Abstract
Smooth sensitivity is one of the most commonly used techniques for designing practical differentially private mechanisms. In this approach, one computes the smooth sensitivity of a given query q on the given input D and releases q(D) with noise added proportional to this smooth sensitivity. One question remains: what distribution should we pick the noise from? In this paper, we give a new class of distributions suitable for use with smooth sensitivity, which we name the PolyPlace distribution. This distribution improves upon the state-of-the-art Student’s t distribution in terms of standard deviation by arbitrarily large factors, depending on a "smoothness parameter" γ, which one has to set in the smooth sensitivity framework. Moreover, our distribution is defined for a wider range of the parameter γ, which can lead to significantly better performance. Furthermore, we prove that the PolyPlace distribution converges to the Laplace distribution for γ → 0, and so does its variance. This means that the Laplace mechanism is a limiting special case of the PolyPlace mechanism, and that our mechanism is in a certain sense optimal for γ → 0.
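
The PolyPlace distribution itself is defined in the paper; the template it plugs into is the generic smooth sensitivity mechanism. A minimal sketch, with Student's t (the baseline named above) standing in for the noise distribution; the degrees of freedom and the scale alpha are illustrative assumptions, not the paper's parameters:

import numpy as np

def smooth_sensitivity_release(q_value, smooth_sens, rng=None):
    # Generic smooth-sensitivity template: release q(D) + (S(D) / alpha) * Z
    # for an admissible noise distribution Z. Here Z is Student's t with
    # df=3, the baseline the paper improves on; PolyPlace would replace it.
    # alpha would be calibrated from (epsilon, gamma); 1.0 is illustrative.
    rng = rng or np.random.default_rng()
    z = rng.standard_t(df=3)
    alpha = 1.0
    return q_value + (smooth_sens / alpha) * z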

Cite as

Richard Hladík and Jakub Tětek. Smooth Sensitivity Revisited: Towards Optimality. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hladik_et_al:LIPIcs.FORC.2025.2,
  author =	{Hlad{\'\i}k, Richard and T\v{e}tek, Jakub},
  title =	{{Smooth Sensitivity Revisited: Towards Optimality}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.2},
  URN =		{urn:nbn:de:0030-drops-231292},
  doi =		{10.4230/LIPIcs.FORC.2025.2},
  annote =	{Keywords: differential privacy, smooth sensitivity}
}
Document
Private Estimation When Data and Privacy Demands Are Correlated

Authors: Syomantak Chaudhuri and Thomas A. Courtade


Abstract
Differential Privacy (DP) is the current gold standard for ensuring privacy for statistical queries. Estimation problems under DP constraints appearing in the literature have largely focused on providing equal privacy to all users. We consider the problems of empirical mean estimation for univariate data and frequency estimation for categorical data, both subject to heterogeneous privacy constraints. Each user, contributing a sample to the dataset, is allowed to have a different privacy demand. The dataset itself is assumed to be worst-case, and we study both problems under two different formulations: first, where privacy demands and data may be correlated, and second, where correlations are weakened by random permutation of the dataset. We establish theoretical performance guarantees for our proposed algorithms, under both PAC error and mean-squared error. These performance guarantees translate to minimax optimality in several instances, and experiments confirm the superior performance of our algorithms over other baseline techniques.
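
As a point of reference for the heterogeneous setting, a minimal local-model baseline (our illustration, not the paper's algorithm): each user reports their clipped value with Laplace noise calibrated to their own demand ε_i, and the server combines reports by inverse-variance weighting.

import numpy as np

def hetero_ldp_mean(x, eps, lo=0.0, hi=1.0, rng=None):
    # Illustrative baseline only: user i reports x_i + Lap((hi-lo)/eps_i),
    # which is eps_i-LDP for that user. Laplace(b) has variance 2*b^2, so
    # inverse-variance weights are proportional to eps_i^2.
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, float), lo, hi)
    eps = np.asarray(eps, float)
    reports = x + rng.laplace(scale=(hi - lo) / eps)
    weights = eps**2
    return float(np.sum(weights * reports) / np.sum(weights))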

Cite as

Syomantak Chaudhuri and Thomas A. Courtade. Private Estimation When Data and Privacy Demands Are Correlated. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chaudhuri_et_al:LIPIcs.FORC.2025.3,
  author =	{Chaudhuri, Syomantak and Courtade, Thomas A.},
  title =	{{Private Estimation When Data and Privacy Demands Are Correlated}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.3},
  URN =		{urn:nbn:de:0030-drops-231305},
  doi =		{10.4230/LIPIcs.FORC.2025.3},
  annote =	{Keywords: Differential Privacy, Personalized Privacy, Heterogeneous Privacy, Correlations in Privacy}
}
Document
Hardness and Approximation Algorithms for Balanced Districting Problems

Authors: Prathamesh Dharangutte, Jie Gao, Shang-En Huang, and Fang-Yi Yu


Abstract
We introduce and study the problem of balanced districting: given an undirected graph with vertices carrying two types of weights (e.g., different population or resource types), the goal is to maximize the total weight covered by vertex-disjoint districts, where each district is a star or (in general) a connected induced subgraph in which the two weight types are balanced. This problem is strongly motivated by political redistricting, where contiguity, population balance, and compactness are essential. We provide hardness results and approximation algorithms for this problem. In particular, we show that approximating the optimum within a factor better than n^{1/2-δ} is NP-hard for any constant δ > 0 in general graphs, even when the districts are star graphs, and we show NP-hardness on complete graphs, trees, planar graphs, and other restricted settings. On the other hand, we develop an algorithm for balanced star districting that gives an O(√n)-approximation on any graph (essentially tight in light of the matching hardness of approximation) and an O(log n)-approximation on planar graphs, with extensions to minor-free graphs. Our algorithm uses a modified Whack-a-Mole algorithm [Bhattacharya, Kiss, and Saranurak, SODA 2023] to find a sparse solution to a fractional packing linear program (despite exponentially many variables), which requires a new separation oracle tailored to our balanced districting problem. To turn the fractional solution into a feasible integer solution, we adopt the randomized rounding algorithm of [Chan and Har-Peled, SoCG 2009]. A crucial element in bounding the approximation ratio of the rounding procedure is balanced scattering separators for planar and minor-free graphs - separators that can be partitioned into a small number of k-hop independent sets for some constant k - which may be of independent interest for other packing-style problems. Further, our algorithm is versatile: the very same algorithm can be analyzed in different ways on various graph classes, which leads to class-dependent approximation ratios. We also provide an FPTAS for complete graphs and trees, as well as greedy algorithms and approximation ratios when the district cardinality is bounded, the graph has bounded degree, or the weights are binary. We refer readers to the full version of the paper for the complete set of results and proofs.
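
For concreteness, a small feasibility-and-value checker for the problem as stated above. This is a sketch under our reading of the definition: the balance test with tolerance tol and the objective a+b per district are illustrative, and the paper's exact balance notion may differ.

import networkx as nx

def districting_value(G, districts, tol=0.0):
    # G: undirected graph whose nodes carry the two weight types "a" and "b".
    # districts: list of vertex sets. Checks vertex-disjointness, connectivity
    # of each induced district, and balance |a(S) - b(S)| <= tol, then returns
    # the total covered weight.
    seen = set()
    total = 0.0
    for S in districts:
        S = set(S)
        assert not (S & seen), "districts must be vertex-disjoint"
        seen |= S
        assert nx.is_connected(G.subgraph(S)), "district must be connected"
        a = sum(G.nodes[v]["a"] for v in S)
        b = sum(G.nodes[v]["b"] for v in S)
        assert abs(a - b) <= tol, "the two weight types must be balanced"
        total += a + b
    return total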

Cite as

Prathamesh Dharangutte, Jie Gao, Shang-En Huang, and Fang-Yi Yu. Hardness and Approximation Algorithms for Balanced Districting Problems. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 4:1-4:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{dharangutte_et_al:LIPIcs.FORC.2025.4,
  author =	{Dharangutte, Prathamesh and Gao, Jie and Huang, Shang-En and Yu, Fang-Yi},
  title =	{{Hardness and Approximation Algorithms for Balanced Districting Problems}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{4:1--4:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.4},
  URN =		{urn:nbn:de:0030-drops-231310},
  doi =		{10.4230/LIPIcs.FORC.2025.4},
  annote =	{Keywords: Approximation algorithms, algorithmic fairness}
}
Document
Pessimism Traps and Algorithmic Interventions

Authors: Avrim Blum, Emily Diana, Kavya Ravichandran, and Alexander Tolbert


Abstract
In this paper, we relate the philosophical literature on pessimism traps to information cascades, a formal model derived from the economics and mathematics literature. A pessimism trap is a social pattern in which individuals in a community, in situations of uncertainty, copy the sub-optimal actions of others, despite their individual beliefs. This maps nicely onto the concept of an information cascade, which involves a sequence of agents, each deciding between two alternatives using a private signal about the superior alternative and a public history of others' actions. Key results from the economics literature show that information cascades occur with probability one in many contexts, and depending on the strength of the signal, populations can fall into the incorrect cascade very easily and quickly. Once formed, a cascade cannot be broken in the absence of external perturbation - we therefore derive an intervention that can be used to nudge a population from an incorrect to a correct cascade and, importantly, maintain the cascade once the subsidy is discontinued. We extend this to the case of multiple communities, each of which might have a different optimal action, and a government providing subsidies that cannot discriminate between communities and does not know which action is optimal for each. We study this both theoretically and empirically.
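
The underlying model is the classic sequential cascade setup from the economics literature. A minimal simulation of that baseline (our simplification, not the paper's intervention):

import numpy as np

def cascade_sim(n_agents=200, p_signal=0.7, rng=None):
    # Classic binary cascade baseline. True state is 1; each agent observes a
    # private signal matching the state with probability p_signal plus the
    # public action history, and acts by a simple Bayesian rule: once the
    # action history leads by 2 or more, history outweighs any single signal
    # and a cascade forms.
    rng = rng or np.random.default_rng()
    actions = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < p_signal else 0
        lead = sum(1 if a == 1 else -1 for a in actions)
        if lead >= 2:
            actions.append(1)        # up-cascade: copy regardless of signal
        elif lead <= -2:
            actions.append(0)        # down-cascade: possibly incorrect
        else:
            actions.append(signal)   # follow own signal
    return actions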

Cite as

Avrim Blum, Emily Diana, Kavya Ravichandran, and Alexander Tolbert. Pessimism Traps and Algorithmic Interventions. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{blum_et_al:LIPIcs.FORC.2025.5,
  author =	{Blum, Avrim and Diana, Emily and Ravichandran, Kavya and Tolbert, Alexander},
  title =	{{Pessimism Traps and Algorithmic Interventions}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.5},
  URN =		{urn:nbn:de:0030-drops-231321},
  doi =		{10.4230/LIPIcs.FORC.2025.5},
  annote =	{Keywords: Pessimism trap, opinion dynamics, algorithmic interventions, subsidy, decision-making}
}
Document
Near-Universally-Optimal Differentially Private Minimum Spanning Trees

Authors: Richard Hladík and Jakub Tětek


Abstract
Devising mechanisms with good beyond-worst-case, input-dependent performance has been an important focus of differential privacy, with techniques such as smooth sensitivity, propose-test-release, or the inverse sensitivity mechanism being developed to achieve this goal. This makes it very natural to use the notion of universal optimality in differential privacy. Universal optimality is a strong instance-specific optimality guarantee for problems on weighted graphs, which roughly states that for any fixed underlying (unweighted) graph, the algorithm is optimal in the worst-case sense with respect to the possible settings of the edge weights. In this paper, we give the first such result in differential privacy. Namely, we prove that a simple differentially private mechanism for approximately releasing the minimum spanning tree is near-optimal in the sense of universal optimality for the 𝓁₁ neighbor relation. Previously, it was only known that this mechanism is nearly optimal in the worst case. We then focus on the 𝓁_∞ neighbor relation, for which the described mechanism is not optimal. We show that one may implement the exponential mechanism for MST in polynomial time, and that this results in universal near-optimality for both the 𝓁₁ and 𝓁_∞ neighbor relations.
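
One natural reading of the "simple mechanism" above is input perturbation: privatize the edge weights once and post-process. A minimal sketch under the 𝓁₁ neighbor relation; the unit sensitivity normalization and the "weight" attribute are our assumptions:

import networkx as nx
import numpy as np

def private_mst_l1(G, eps, rng=None):
    # Add Laplace(1/eps) noise to every edge weight and release the MST of
    # the noisy graph (post-processing). Assumes neighboring inputs differ
    # by at most 1 in l1 over the weight vector; the noisy MST itself is the
    # private output.
    rng = rng or np.random.default_rng()
    H = G.copy()
    for _, _, d in H.edges(data=True):
        d["weight"] = d["weight"] + rng.laplace(scale=1.0 / eps)
    return nx.minimum_spanning_tree(H)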

Cite as

Richard Hladík and Jakub Tětek. Near-Universally-Optimal Differentially Private Minimum Spanning Trees. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hladik_et_al:LIPIcs.FORC.2025.6,
  author =	{Hlad{\'\i}k, Richard and T\v{e}tek, Jakub},
  title =	{{Near-Universally-Optimal Differentially Private Minimum Spanning Trees}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.6},
  URN =		{urn:nbn:de:0030-drops-231337},
  doi =		{10.4230/LIPIcs.FORC.2025.6},
  annote =	{Keywords: differential privacy, universal optimality, minimum spanning trees}
}
Document
Kernel Multiaccuracy

Authors: Carol Xuan Long, Wael Alghamdi, Alexander Glynn, Yixuan Wu, and Flavio P. Calmon


Abstract
Predefined demographic groups often overlook the subpopulations most impacted by model errors, leading to a growing emphasis on data-driven methods that pinpoint where models underperform. The emerging field of multi-group fairness addresses this by ensuring models perform well across a wide range of group-defining functions, rather than relying on fixed demographic categories. We demonstrate that recently introduced notions of multi-group fairness can be equivalently formulated as integral probability metrics (IPMs). IPMs are the common information-theoretic tool that underlies definitions such as multiaccuracy, multicalibration, and outcome indistinguishability. For multiaccuracy, this connection leads to a simple yet powerful procedure for achieving multiaccuracy with respect to an infinite-dimensional class of functions defined by a reproducing kernel Hilbert space (RKHS): first perform a kernel regression of a model’s errors, then subtract the resulting function from the model’s predictions. We combine these results to develop a post-processing method that improves multiaccuracy with respect to bounded-norm functions in an RKHS, enjoys provable performance guarantees, and, in binary classification benchmarks, achieves favorable multiaccuracy relative to competing methods.
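
The two-step procedure described above translates almost directly into code. A minimal sketch with scikit-learn; the RBF kernel and the regularization strength are illustrative choices, not the paper's:

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def kernel_multiaccuracy_postprocess(X, y, predict, lam=1e-3):
    # Step 1: kernel-regress the base model's errors on the features.
    # Step 2: subtract the fitted error function from the model's predictions.
    errors = predict(X) - y
    reg = KernelRidge(kernel="rbf", alpha=lam).fit(X, errors)
    return lambda X_new: predict(X_new) - reg.predict(X_new)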

Cite as

Carol Xuan Long, Wael Alghamdi, Alexander Glynn, Yixuan Wu, and Flavio P. Calmon. Kernel Multiaccuracy. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 7:1-7:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{long_et_al:LIPIcs.FORC.2025.7,
  author =	{Long, Carol Xuan and Alghamdi, Wael and Glynn, Alexander and Wu, Yixuan and Calmon, Flavio P.},
  title =	{{Kernel Multiaccuracy}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{7:1--7:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.7},
  URN =		{urn:nbn:de:0030-drops-231341},
  doi =		{10.4230/LIPIcs.FORC.2025.7},
  annote =	{Keywords: algorithmic fairness, integral probability metrics, information theory}
}
Document
Differential Privacy Under Multiple Selections

Authors: Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar


Abstract
We consider the setting where a user with sensitive features wishes to obtain a recommendation from a server in a differentially private fashion. We propose a "multi-selection" architecture where the server can send back multiple recommendations and the user chooses the one that best matches their private features. When the user feature is one-dimensional - on an infinite line - and the accuracy measure is defined w.r.t. some increasing function 𝔥(.) of the distance on the line, we precisely characterize the optimal mechanism that satisfies differential privacy. The specification of the optimal mechanism includes both the distribution of the noise that the user adds to its private value and the algorithm used by the server to determine the set of results to send back as a response. We show that Laplace is an optimal noise distribution in this setting. Furthermore, we show that this optimal mechanism results in an error that is inversely proportional to the number of results returned when the function 𝔥(.) is the identity function.
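
A minimal sketch of one round of the architecture. The Laplace report follows the paper's optimality result; the server's evenly spaced candidate placement and the spread parameter are our illustrative stand-ins for its optimal rule:

import numpy as np

def multi_selection_round(x_private, eps, k, spread=1.0, rng=None):
    # User: report the one-dimensional feature with Laplace noise (eps-DP).
    # Server: reply with k candidate points around the report (illustrative
    # placement). User: privately pick the candidate closest to the feature.
    rng = rng or np.random.default_rng()
    report = x_private + rng.laplace(scale=1.0 / eps)
    candidates = report + spread * (np.arange(k) - (k - 1) / 2)
    return candidates[np.argmin(np.abs(candidates - x_private))]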

Cite as

Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar. Differential Privacy Under Multiple Selections. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 8:1-8:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{goel_et_al:LIPIcs.FORC.2025.8,
  author =	{Goel, Ashish and Jiang, Zhihao and Korolova, Aleksandra and Munagala, Kamesh and Sarmasarkar, Sahasrajit},
  title =	{{Differential Privacy Under Multiple Selections}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{8:1--8:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.8},
  URN =		{urn:nbn:de:0030-drops-231353},
  doi =		{10.4230/LIPIcs.FORC.2025.8},
  annote =	{Keywords: Differential Privacy, Mechanism Design and Multi-Selection}
}
Document
Optimal Rates for Robust Stochastic Convex Optimization

Authors: Changyu Gao, Andrew Lowy, Xingyu Zhou, and Stephen J. Wright


Abstract
Machine learning algorithms in high-dimensional settings are highly susceptible to the influence of even a small fraction of structured outliers, making robust optimization techniques essential. In particular, within the ε-contamination model, where an adversary can inspect and replace up to an ε-fraction of the samples, a fundamental open problem is determining the optimal rates for robust stochastic convex optimization (SCO) under such contamination. We develop novel algorithms that achieve minimax-optimal excess risk (up to logarithmic factors) under the ε-contamination model. Our approach improves over existing algorithms, which are not only suboptimal but also require stringent assumptions, including Lipschitz continuity and smoothness of individual sample functions. By contrast, our optimal algorithms do not require these stringent assumptions, assuming only population-level smoothness of the loss. Moreover, our algorithms can be adapted to handle the case in which the covariance parameter is unknown, and can be extended to nonsmooth population risks via convolutional smoothing. We complement our algorithmic developments with a tight information-theoretic lower bound for robust SCO.
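
The paper's algorithms are more refined, but the basic pattern in robust SCO is to replace the empirical gradient with a robust aggregate before stepping. A deliberately simple sketch; the coordinate-wise trimmed mean and the sample_grads callback are our illustrative choices, not the paper's method:

import numpy as np

def trimmed_mean(grads, frac):
    # Coordinate-wise trimmed mean: drop the frac smallest and frac largest
    # values in each coordinate, then average the rest.
    g = np.sort(np.asarray(grads), axis=0)
    k = int(frac * len(g))
    return g[k : len(g) - k or None].mean(axis=0)

def robust_sgd(sample_grads, x0, steps=100, lr=0.1, frac=0.1):
    # Gradient descent on a robust estimate of the population gradient under
    # eps-contamination: sample_grads(x) returns per-sample gradients at x.
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x - lr * trimmed_mean(sample_grads(x), frac)
    return x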

Cite as

Changyu Gao, Andrew Lowy, Xingyu Zhou, and Stephen J. Wright. Optimal Rates for Robust Stochastic Convex Optimization. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gao_et_al:LIPIcs.FORC.2025.9,
  author =	{Gao, Changyu and Lowy, Andrew and Zhou, Xingyu and Wright, Stephen J.},
  title =	{{Optimal Rates for Robust Stochastic Convex Optimization}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.9},
  URN =		{urn:nbn:de:0030-drops-231369},
  doi =		{10.4230/LIPIcs.FORC.2025.9},
  annote =	{Keywords: Adversarial Robustness, Machine Learning, Optimization Algorithms, Robust Optimization, Stochastic Convex Optimization}
}
Document
Count on Your Elders: Laplace vs Gaussian Noise

Authors: Joel Daniel Andersson, Rasmus Pagh, Teresa Anna Steiner, and Sahel Torkamani


Abstract
In recent years, Gaussian noise has become a popular tool in differentially private algorithms, often replacing the Laplace noise that dominated the early literature on differential privacy. Gaussian noise is the standard approach to approximate differential privacy, often resulting in much higher utility than traditional (pure) differential privacy mechanisms. In this paper we argue that Laplace noise may in fact be preferable to Gaussian noise in many settings, in particular when we seek to achieve (ε,δ)-differential privacy for small values of δ. We consider two scenarios: First, we consider the problem of counting under continual observation and present a new generalization of the binary tree mechanism that uses a k-ary number system with negative digits to improve the privacy-accuracy trade-off. Our mechanism uses Laplace noise and, whenever δ is sufficiently small, improves the mean squared error over the best possible (ε,δ)-differentially private factorization mechanisms based on Gaussian noise. Specifically, using k = 19 we get an asymptotic improvement over the bound given in the work by Henzinger, Upadhyay and Upadhyay (SODA 2023) when δ = O(T^{-0.92}). Second, we show that the noise added by the Gaussian mechanism can always be replaced by Laplace noise of comparable variance for the same (ε, δ)-differential privacy guarantee, and in fact for sufficiently small δ the variance of the Laplace noise becomes strictly smaller. This challenges the conventional wisdom that Gaussian noise should be preferred in high-dimensional settings. Finally, we study whether counting under continual observation may be easier in an average-case sense than in a worst-case sense. We show that, under pure differential privacy, the expected worst-case error for a random input must be Ω(log(T)/ε), matching the known lower bound for worst-case inputs.
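
For reference, the classic binary-tree mechanism with Laplace noise, the baseline that the paper's k-ary, negative-digit construction generalizes. A minimal sketch:

import numpy as np

def continual_counts(stream, eps, rng=None):
    # Counting under continual observation. Each stream item lies in one
    # dyadic node per level, i.e. in `levels` nodes total, so Laplace noise of
    # scale levels/eps at every node yields eps-DP; each prefix sum is then
    # assembled from at most `levels` noisy nodes.
    rng = rng or np.random.default_rng()
    T = len(stream)
    levels = int(np.ceil(np.log2(max(T, 2)))) + 1
    cache = {}

    def node(level, idx):  # noisy sum of stream[idx*2^level : (idx+1)*2^level]
        if (level, idx) not in cache:
            lo = idx << level
            cache[level, idx] = (sum(stream[lo : lo + (1 << level)])
                                 + rng.laplace(scale=levels / eps))
        return cache[level, idx]

    out = []
    for t in range(1, T + 1):  # greedy dyadic decomposition of [0, t)
        s, pos = 0.0, 0
        for level in range(levels - 1, -1, -1):
            if pos + (1 << level) <= t:
                s += node(level, pos >> level)
                pos += 1 << level
        out.append(s)
    return out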

Cite as

Joel Daniel Andersson, Rasmus Pagh, Teresa Anna Steiner, and Sahel Torkamani. Count on Your Elders: Laplace vs Gaussian Noise. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{andersson_et_al:LIPIcs.FORC.2025.10,
  author =	{Andersson, Joel Daniel and Pagh, Rasmus and Steiner, Teresa Anna and Torkamani, Sahel},
  title =	{{Count on Your Elders: Laplace vs Gaussian Noise}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.10},
  URN =		{urn:nbn:de:0030-drops-231376},
  doi =		{10.4230/LIPIcs.FORC.2025.10},
  annote =	{Keywords: differential privacy, continual observation, streaming, prefix sums, trees}
}
Document
Laplace Transform Interpretation of Differential Privacy

Authors: Rishav Chourasia, Uzair Javaid, and Biplab Sikdar


Abstract
We introduce a set of useful expressions of Differential Privacy (DP) notions in terms of Laplace transforms. The underlying bare-form expressions for these transforms appear in several works on analyzing DP, either as an integral or an expectation. We show that recognizing these expressions as Laplace transforms unlocks a new way to reason about DP properties by exploiting the duality between the time and frequency domains. Leveraging our interpretation, we connect the (q, ρ(q))-Rényi DP curve and the (ε, δ(ε))-DP curve as being the Laplace and inverse-Laplace transforms of one another. Using our Laplace transform-based analysis, we also prove an adaptive composition theorem for (ε, δ)-DP guarantees that is exactly tight (i.e., matches even in constants) for all values of ε. Additionally, we resolve an issue regarding the symmetry of f-DP under subsampling that prevented equivalence across all functional DP notions.
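
In terms of the privacy-loss random variable L (a standard device; this presentation is ours, not the paper's), the two bare-form expressions referred to above read

$$\delta(\varepsilon) \;=\; \mathbb{E}\!\left[\left(1 - e^{\varepsilon - L}\right)_{+}\right], \qquad e^{(q-1)\rho(q)} \;=\; \mathbb{E}\!\left[e^{(q-1)L}\right],$$

and the second expectation is a (two-sided) Laplace transform of the law of L evaluated at -(q-1), which is the transform-pair reading the paper develops.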

Cite as

Rishav Chourasia, Uzair Javaid, and Biplab Sikdar. Laplace Transform Interpretation of Differential Privacy. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chourasia_et_al:LIPIcs.FORC.2025.11,
  author =	{Chourasia, Rishav and Javaid, Uzair and Sikdar, Biplab},
  title =	{{Laplace Transform Interpretation of Differential Privacy}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.11},
  URN =		{urn:nbn:de:0030-drops-231387},
  doi =		{10.4230/LIPIcs.FORC.2025.11},
  annote =	{Keywords: Differential Privacy, Composition Theorem, Laplace Transform}
}
Document
Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High ε Regime

Authors: Charlie Harrison and Pasin Manurangsi


Abstract
Differential privacy (DP) can be achieved in a distributed manner, where multiple parties add independent noise such that their sum protects the overall dataset with DP. A common technique here is for each party to sample their noise from the decomposition of an infinitely divisible distribution. We analyze two mechanisms in this setting: 1) the generalized discrete Laplace (GDL) mechanism, whose distribution (which is closed under summation) follows from differences of i.i.d. negative binomial shares, and 2) the multi-scale discrete Laplace (MSDLap) mechanism, a novel mechanism whose distribution follows the sum of multiple i.i.d. discrete Laplace shares at different scales. For ε ≥ 1, our mechanisms can be parameterized to have O(Δ³ e^{-ε}) and O(min(Δ³ e^{-ε}, Δ² e^{-2ε/3})) MSE, respectively, where Δ denotes the sensitivity; the latter bound matches known optimality results. Furthermore, the MSDLap mechanism has the optimal MSE, including constants, as ε → ∞. We also show a transformation from the discrete setting to the continuous setting, which allows us to transform both mechanisms to the continuous setting and thereby achieve the optimal O(Δ² e^{-2ε/3}) MSE. To our knowledge, these are the first infinitely divisible additive noise mechanisms that achieve order-optimal MSE under pure DP in either the discrete or the continuous setting, so our work shows formally that there is no separation in utility when query-independent noise-adding mechanisms are restricted to infinitely divisible noise. For the continuous setting, our result improves upon Pagh and Stausholm’s Arete distribution, which gives an MSE of O(Δ² e^{-ε/4}) [Pagh and Stausholm, 2022]. Furthermore, we give an exact sampler tuned to efficiently implement the MSDLap mechanism, and we apply our results to improve a state-of-the-art multi-message shuffle DP protocol from [Balle et al., 2020] in the high ε regime.
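
The infinite-divisibility idea behind the GDL shares can be seen in a few lines. A sketch with our parameter naming (the paper's exact parameterization may differ): negative binomials sum in their first parameter, so per-party differences of NB(r/n, p) draws aggregate to a difference of two NB(r, p) draws, a generalized discrete Laplace; r = 1 recovers the standard discrete Laplace.

import numpy as np

def gdl_noise_shares(n_parties, r, p, rng=None):
    # Each party contributes the difference of two i.i.d. NB(r/n, p) draws;
    # sum(shares) is then distributed as NB(r, p) - NB(r, p), i.e. GDL noise.
    rng = rng or np.random.default_rng()
    a = rng.negative_binomial(r / n_parties, p, size=n_parties)
    b = rng.negative_binomial(r / n_parties, p, size=n_parties)
    return a - b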

Cite as

Charlie Harrison and Pasin Manurangsi. Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High ε Regime. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{harrison_et_al:LIPIcs.FORC.2025.12,
  author =	{Harrison, Charlie and Manurangsi, Pasin},
  title =	{{Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High \epsilon Regime}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.12},
  URN =		{urn:nbn:de:0030-drops-231396},
  doi =		{10.4230/LIPIcs.FORC.2025.12},
  annote =	{Keywords: Differential Privacy, Distributed Noise Addition}
}
Document
Extended Abstract
Cost over Content: Information Choice in Trade (Extended Abstract)

Authors: Kristóf Madarász and Marek Pycia


Abstract
How much would buyers pay to have some control over what a seller knows about them? When deciding what information to provide to her counterpart, a privately-informed trader chooses between options that may differ both in their contents and in their costs. For a large class of static and dynamic trading environments where buyers choose from arbitrary sets of signal processes that reveal or obfuscate information to a seller, we establish a "cost-over-content" theorem. In equilibrium, buyers choose only the cheapest processes, regardless of the information content they provide, and pooling on any cheapest process is an equilibrium. Our paper uncovers a general source of market failure linked to the direct cost of information choice, with consequences for the role of information defaults. We explore applications to bargaining, signaling, disclosure, consumer privacy, and data trade. At the FORC presentation we focus on the privacy application. Our cost-over-content results explain why consumers may choose little protection of their private information even when protecting it would improve their welfare; we show that this so-called privacy paradox is an equilibrium phenomenon driven by market power. Our results also lead to predictions on how an online-shopping platform provider designs the information flows between sellers and buyers. While the platform can charge both the buyer and the seller for its offering of information structures, following the logic of our cost-over-content theorem, we show that the platform neither charges nor compensates the buyer for choosing an information structure. At the same time, the platform charges the seller and maximizes the value of the information passed to the seller, as measured by the seller’s direct profit from interacting with the buyer. This asymmetry is relevant for the impact of the widespread ability to track consumers on economic outcomes and the distribution of gains from trade, and for the effectiveness of various consumer-protection policies.

Cite as

Kristóf Madarász and Marek Pycia. Cost over Content: Information Choice in Trade (Extended Abstract). In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, p. 13:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{madarasz_et_al:LIPIcs.FORC.2025.13,
  author =	{Madar\'{a}sz, Krist\'{o}f and Pycia, Marek},
  title =	{{Cost over Content: Information Choice in Trade}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{13:1--13:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.13},
  URN =		{urn:nbn:de:0030-drops-231409},
  doi =		{10.4230/LIPIcs.FORC.2025.13},
  annote =	{Keywords: Communication Costs, Default-based Regulation, Data Trade, Dynamic Pricing, Privacy Paradox, Signaling, Disclosure, Efficiency, Online Platforms}
}
Document
Model Ensembling for Constrained Optimization

Authors: Ira Globus Harris, Varun Gupta, Michael Kearns, and Aaron Roth


Abstract
Many instances of decision making under objective uncertainty can be decomposed into two steps: predicting the objective function and then optimizing for the best feasible action under the estimate of the objective vector. We study the problem of ensembling models for optimization of uncertain linear objectives under arbitrary constraints. We imagine we are given a collection of predictive models mapping a feature space to multi-dimensional real-valued predictions, which form the coefficients of a linear objective that we would like to optimize. We give two ensembling methods that can provably result in transparent decisions that strictly improve on all initial policies. The first method operates in the "white box" setting, in which we have access to the underlying prediction models, and the second in the "black box" setting, in which we only have access to the induced decisions (in the downstream optimization problem) of the constituent models, but not their underlying point predictions. They are transparent or trustworthy in the sense that the user can reliably predict long-term ensemble rewards even if the instance-by-instance predictions are imperfect.
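
The two-step decomposition in the first sentence is the pipeline being ensembled. A minimal sketch of that pipeline only (not the paper's ensembling methods), with an illustrative polytope feasible set:

import numpy as np
from scipy.optimize import linprog

def predict_then_optimize(predict, features, A_ub, b_ub):
    # Step 1: estimate the linear objective's coefficients from features.
    # Step 2: maximize the estimated objective over the feasible set, here
    # the polytope {x : A_ub x <= b_ub, 0 <= x <= 1} (illustrative).
    c = np.asarray(predict(features), float)
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))  # linprog minimizes
    return res.x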

Cite as

Ira Globus Harris, Varun Gupta, Michael Kearns, and Aaron Roth. Model Ensembling for Constrained Optimization. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{globusharris_et_al:LIPIcs.FORC.2025.14,
  author =	{Globus Harris, Ira and Gupta, Varun and Kearns, Michael and Roth, Aaron},
  title =	{{Model Ensembling for Constrained Optimization}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{14:1--14:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.14},
  URN =		{urn:nbn:de:0030-drops-231412},
  doi =		{10.4230/LIPIcs.FORC.2025.14},
  annote =	{Keywords: model ensembling, trustworthy AI, decision-making under uncertainty}
}
Document
Differentially Private High-Dimensional Approximate Range Counting, Revisited

Authors: Martin Aumüller, Fabrizio Boninsegna, and Francesco Silvestri


Abstract
Locality Sensitive Filters are known for offering a quasi-linear space data structure with rigorous guarantees for the Approximate Near Neighbor search (ANN) problem. Building on Locality Sensitive Filters, we derive a simple data structure for the Approximate Near Neighbor Counting (ANNC) problem under differential privacy (DP). Moreover, we provide a simple analysis leveraging a connection with concomitant statistics and extreme value theory. Our approach produces a simple data structure with a tunable parameter that regulates a trade-off between space-time and utility. Through this trade-off, our data structure achieves the same performance as the recent findings of Andoni et al. (NeurIPS 2023) while offering better utility at the cost of higher space and query time. In addition, we provide a more efficient algorithm under pure ε-DP and elucidate the connection between ANN and differentially private ANNC. As a side result, the paper provides a more compact description and analysis of Locality Sensitive Filters for Fair Near Neighbor Search, improving a previous result in Aumüller et al. (TODS 2022).

Cite as

Martin Aumüller, Fabrizio Boninsegna, and Francesco Silvestri. Differentially Private High-Dimensional Approximate Range Counting, Revisited. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 15:1-15:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{aumuller_et_al:LIPIcs.FORC.2025.15,
  author =	{Aum\"{u}ller, Martin and Boninsegna, Fabrizio and Silvestri, Francesco},
  title =	{{Differentially Private High-Dimensional Approximate Range Counting, Revisited}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{15:1--15:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.15},
  URN =		{urn:nbn:de:0030-drops-231426},
  doi =		{10.4230/LIPIcs.FORC.2025.15},
  annote =	{Keywords: Differential Privacy, Locality Sensitive Filters, Approximate Range Counting, Concomitant Statistics}
}
Document
Smooth Calibration and Decision Making

Authors: Jason Hartline, Yifan Wu, and Yunran Yang


Abstract
Calibration requires predictor outputs to be consistent with their Bayesian posteriors. For machine learning predictors that do not distinguish between small perturbations, calibration errors are continuous in predictions, e.g., smooth calibration error [Foster and Hart, 2018] and distance to calibration [Błasiok et al., 2023]. In contrast, decision-makers who use predictions make optimal decisions discontinuously in probabilistic space, and so experience loss from miscalibration discontinuously. Calibration errors for decision-making are thus discontinuous, e.g., Expected Calibration Error (ECE) [Foster and Vohra, 1997] and Calibration Decision Loss (CDL) [Hu and Wu, 2024]. Thus, predictors with a low calibration error for machine learning may suffer a high calibration error for decision-making, i.e., they may not be trustworthy for decision-makers who optimize assuming the predictions are correct. It is natural to ask whether post-processing a predictor with a low calibration error for machine learning can, without loss, achieve a low calibration error for decision-making. In our paper, we show that post-processing an online predictor with ε distance to calibration achieves O(√{ε}) ECE and CDL, which is asymptotically optimal. The post-processing algorithm adds noise to make predictions differentially private. However, the optimal bound obtained by post-processing predictors with low distance to calibration is still weaker than what existing online calibration algorithms that directly optimize ECE and CDL achieve.
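
For reference, the standard binned estimate of ECE, one of the discontinuous, decision-facing errors discussed above (equal-width binning is the usual illustration, not the paper's construction):

import numpy as np

def ece(preds, outcomes, n_bins=10):
    # Binned Expected Calibration Error: average |mean prediction - empirical
    # rate| over equal-width prediction bins, weighted by bin mass.
    preds, outcomes = np.asarray(preds), np.asarray(outcomes)
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(preds[mask].mean() - outcomes[mask].mean())
    return err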

Cite as

Jason Hartline, Yifan Wu, and Yunran Yang. Smooth Calibration and Decision Making. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 16:1-16:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hartline_et_al:LIPIcs.FORC.2025.16,
  author =	{Hartline, Jason and Wu, Yifan and Yang, Yunran},
  title =	{{Smooth Calibration and Decision Making}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{16:1--16:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.16},
  URN =		{urn:nbn:de:0030-drops-231438},
  doi =		{10.4230/LIPIcs.FORC.2025.16},
  annote =	{Keywords: Calibration, calibration errors, decision making, differential privacy}
}
Document
Debiasing Functions of Private Statistics in Postprocessing

Authors: Flavio Calmon, Elbert Du, Cynthia Dwork, Brian Finley, and Grigory Franguridi


Abstract
Given a differentially private unbiased estimate q̃ = q(D) + ν of a statistic q(D), we wish to obtain unbiased estimates of functions of q(D), such as 1/q(D), solely through post-processing of q̃, with no further access to the confidential dataset D. To this end, we adapt the deconvolution method used for unbiased estimation in the statistical literature, deriving unbiased estimators for a broad family of twice-differentiable functions - those that are tempered distributions - when the privacy-preserving noise ν is drawn from the Laplace distribution (Dwork et al., 2006). We further extend this technique to functions other than tempered distributions, deriving approximately optimal estimators that are unbiased for values in a user-specified interval (possibly extending to ±∞). We use these results to derive an unbiased estimator for private means when the size n of the dataset is not publicly known. In a numerical application, we find that a mechanism that uses our estimator to return an unbiased sample size and mean outperforms a mechanism that instead uses the previously known unbiased privacy mechanism for such means (Kamath et al., 2023). We also apply our estimators to develop unbiased transformation mechanisms for per-record differential privacy, a privacy concept in which the privacy guarantee is a public function of a record’s value (Seeman et al., 2024). Our mechanisms provide stronger privacy guarantees than those in prior work (Finley et al., 2024) by using Laplace, rather than Gaussian, noise. Finally, using a different approach, we go beyond Laplace noise by deriving unbiased estimators for polynomials under the weak condition that the noise distribution has sufficiently many moments.
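
A worked instance of the deconvolution recipe (our illustration of the general idea, not the paper's full estimator): the Laplace(b) density has Fourier transform 1/(1 + b²ω²), so applying the operator (1 − b² d²/dx²) to f and evaluating at the noisy value is unbiased for f(q). For f(q) = q² this yields the estimator q̃² − 2b²:

import numpy as np

def unbiased_square(q_tilde, b):
    # For f(x) = x^2 we have f''(x) = 2, so f(q_tilde) - b^2 * f''(q_tilde)
    # = q_tilde^2 - 2*b^2, which has expectation q^2 when q_tilde = q + Lap(b)
    # (since E[q_tilde^2] = q^2 + 2*b^2).
    return q_tilde**2 - 2 * b**2

rng = np.random.default_rng(0)
q, b = 3.0, 0.5
noisy = q + rng.laplace(scale=b, size=200_000)
print(unbiased_square(noisy, b).mean())  # Monte Carlo check: close to q^2 = 9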

Cite as

Flavio Calmon, Elbert Du, Cynthia Dwork, Brian Finley, and Grigory Franguridi. Debiasing Functions of Private Statistics in Postprocessing. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{calmon_et_al:LIPIcs.FORC.2025.17,
  author =	{Calmon, Flavio and Du, Elbert and Dwork, Cynthia and Finley, Brian and Franguridi, Grigory},
  title =	{{Debiasing Functions of Private Statistics in Postprocessing}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.17},
  URN =		{urn:nbn:de:0030-drops-231449},
  doi =		{10.4230/LIPIcs.FORC.2025.17},
  annote =	{Keywords: Differential privacy, deconvolution, unbiasedness}
}
Document
Extended Abstract
Differentially Private Sequential Learning (Extended Abstract)

Authors: Yuxin Liu and M. Amin Rahimian


Abstract
In a differentially private sequential learning setting, agents introduce endogenous noise into their public actions to limit information leakage about their private signals. The impact of this privacy noise varies depending on whether the signals are continuous or binary. For continuous signals and a finite privacy budget ε > 0, we propose a smooth randomized response mechanism that adapts the noise level based on the distance to a decision threshold, in contrast to the standard randomized response with uniform noise. This allows agents’ actions to better reflect both their private signals and public history, achieving an accelerated convergence rate of Θ_ε(log n), surpassing the rate of Θ(√{log n}) in the non-private regime. In this case, privacy noise helps to amplify the log-likelihood ratio over time, improving information aggregation. For binary signals, differential privacy consistently degrades learning performance by reducing the probability of correct cascades compared to the non-private baseline. In this case, agents tend to use a constant randomized response strategy before the information cascade occurs. This constant privacy noise reduces the informativeness of their actions and hinders effective learning until an information cascade occurs. However, even for binary signals, the probability of correct cascades does not vary monotonically with the privacy budget ε. There are values of ε where the probability of a correct cascade increases as the privacy budget decreases because the threshold for initiating an information cascade increases by one when the privacy budget crosses below those values.
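
The constant strategy referred to in the binary-signal case is standard ε-DP randomized response. A minimal sketch:

import numpy as np

def randomized_response(bit, eps, rng=None):
    # Report the true bit with probability e^eps / (1 + e^eps), else flip;
    # this is the standard eps-differentially-private randomized response.
    rng = rng or np.random.default_rng()
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    return bit if rng.random() < keep else 1 - bit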

Cite as

Yuxin Liu and M. Amin Rahimian. Differentially Private Sequential Learning (Extended Abstract). In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 18:1-18:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{liu_et_al:LIPIcs.FORC.2025.18,
  author =	{Liu, Yuxin and Rahimian, M. Amin},
  title =	{{Differentially Private Sequential Learning}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{18:1--18:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.18},
  URN =		{urn:nbn:de:0030-drops-231450},
  doi =		{10.4230/LIPIcs.FORC.2025.18},
  annote =	{Keywords: Differential Privacy, Sequential Learning, Randomized Response, Learning Efficiency}
}
Document
Mapping the Tradeoffs and Limitations of Algorithmic Fairness

Authors: Etam Benger and Katrina Ligett


Abstract
Sufficiency and separation are two fundamental criteria in classification fairness. For binary classifiers, these concepts correspond to subgroup calibration and equalized odds, respectively, and are known to be incompatible except in trivial cases. In this work, we explore a relaxation of these criteria based on f-divergences between distributions - essentially the same relaxation studied in the literature on approximate multicalibration - analyze their relationships, and derive implications for fair representations and downstream uses (post-processing) of representations. We show that when a protected attribute is determinable from features present in the data, the (relaxed) criteria of sufficiency and separation exhibit a tradeoff, forming a convex Pareto frontier. Moreover, we prove that when a protected attribute is not fully encoded in the data, achieving full sufficiency may be impossible. This finding not only strengthens the case against "fairness through unawareness" but also highlights an important caveat for work on (multi-)calibration.
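
For a score R, outcome Y, and protected attribute A, the two criteria are standardly stated as conditional independences (the binary-classifier forms mentioned above):

$$\text{Sufficiency: } Y \perp A \mid R \;\;(\text{subgroup calibration}), \qquad \text{Separation: } R \perp A \mid Y \;\;(\text{equalized odds}).$$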

Cite as

Etam Benger and Katrina Ligett. Mapping the Tradeoffs and Limitations of Algorithmic Fairness. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 19:1-19:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{benger_et_al:LIPIcs.FORC.2025.19,
  author =	{Benger, Etam and Ligett, Katrina},
  title =	{{Mapping the Tradeoffs and Limitations of Algorithmic Fairness}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{19:1--19:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.19},
  URN =		{urn:nbn:de:0030-drops-231465},
  doi =		{10.4230/LIPIcs.FORC.2025.19},
  annote =	{Keywords: Algorithmic fairness, information theory, sufficiency-separation tradeoff}
}
Document
Group Fairness and Multi-Criteria Optimization in School Assignment

Authors: Santhini K. A., Kamesh Munagala, Meghana Nasre, and Govind S. Sankar


Abstract
We consider the problem of assigning students to schools when students have different utilities for schools and schools have limited capacities. The students belong to demographic groups, and fairness over these groups is captured either by concave objectives, or additional constraints on the utility of the groups. We present approximation algorithms for this assignment problem with group fairness via convex program rounding. These algorithms achieve various trade-offs between capacity violation and running time. We also show that our techniques easily extend to the setting where there are arbitrary constraints on the feasible assignment, capturing multi-criteria optimization. We present simulation results that demonstrate that the rounding methods are practical even on large problem instances, with the empirical capacity violation being much better than the theoretical bounds.

Cite as

Santhini K. A., Kamesh Munagala, Meghana Nasre, and Govind S. Sankar. Group Fairness and Multi-Criteria Optimization in School Assignment. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{k.a._et_al:LIPIcs.FORC.2025.20,
  author =	{K. A., Santhini and Munagala, Kamesh and Nasre, Meghana and S. Sankar, Govind},
  title =	{{Group Fairness and Multi-Criteria Optimization in School Assignment}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.20},
  URN =		{urn:nbn:de:0030-drops-231471},
  doi =		{10.4230/LIPIcs.FORC.2025.20},
  annote =	{Keywords: School Assignment, Approximation Algorithms, Group Fairness}
}
Document
OWA for Bipartite Assignments

Authors: Jabari Hastings, Sigal Oren, and Omer Reingold


Abstract
In resource allocation problems, a central planner often strives for a fair assignment. A challenge they might face, however, is that several objectives could be argued to be fair, such as the max-min and maximum social welfare. In this work, we study bipartite assignment problems involving the optimization of a class of functions that is sensitive to the relative utilities derived by individuals in an allocation and captures these traditional objectives. One subclass of these functions consists of the "fair" ordered weighted averages (OWA) introduced by Lesca et al. (Algorithmica 2019), which are most sensitive to the utilities received by the worst-off individuals. We show that the task of optimizing an arbitrary function from this subclass belongs to the complexity class FPT, resolving an open question raised by that work; we also provide a polynomial-time approximation scheme (PTAS). In addition, we introduce and study another subclass of evaluation functions that targets the average welfare attained within some interval of the economic ladder (e.g., the bottom 10%, middle 50%, or top 80%). We provide an efficient algorithm that can be used to optimize the welfare for an arbitrary interval and also show how the approach can be used to approximate more general evaluation functions.
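
A minimal sketch of the two objective classes from the abstract, with made-up utilities and weights. A "fair" OWA applies non-increasing weights to utilities sorted in ascending order, so the worst-off count most; interval welfare averages the utilities within a slice of the economic ladder:

# Illustrative evaluation functions; data below is invented.
import numpy as np

def fair_owa(utils, weights):
    """OWA with non-increasing weights applied to ascending-sorted utilities."""
    u = np.sort(np.asarray(utils, dtype=float))     # worst-off first
    w = np.asarray(weights, dtype=float)
    assert np.all(np.diff(w) <= 0), "fair OWA requires non-increasing weights"
    return float(w @ u)

def interval_welfare(utils, lo, hi):
    """Average utility of individuals between quantiles lo and hi."""
    u = np.sort(np.asarray(utils, dtype=float))
    n = len(u)
    i, j = int(np.floor(lo * n)), max(int(np.ceil(hi * n)), 1)
    return float(u[i:j].mean())

utils = [3.0, 9.0, 1.0, 7.0]
print(fair_owa(utils, [0.4, 0.3, 0.2, 0.1]))  # emphasizes the worst-off
print(interval_welfare(utils, 0.0, 0.5))      # bottom 50% average

Note how the traditional objectives arise as special cases: weights (1, 0, ..., 0) recover the max-min objective, while uniform weights recover average social welfare.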

Cite as

Jabari Hastings, Sigal Oren, and Omer Reingold. OWA for Bipartite Assignments. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 21:1-21:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{hastings_et_al:LIPIcs.FORC.2025.21,
  author =	{Hastings, Jabari and Oren, Sigal and Reingold, Omer},
  title =	{{OWA for Bipartite Assignments}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{21:1--21:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.21},
  URN =		{urn:nbn:de:0030-drops-231482},
  doi =		{10.4230/LIPIcs.FORC.2025.21},
  annote =	{Keywords: fairness, matchings, approximation algorithms}
}
Document
When Does a Predictor Know Its Own Loss?

Authors: Aravind Gollakota, Parikshit Gopalan, Aayush Karan, Charlotte Peale, and Udi Wieder


Abstract
Given a predictor and a loss function, how well can we predict the loss that the predictor will incur on an input? This is the problem of loss prediction, a key computational task associated with uncertainty estimation for a predictor. In a classification setting, a predictor will typically predict a distribution over labels and hence have its own estimate of the loss that it will incur, given by the entropy of the predicted distribution. Should we trust this estimate? In other words, when does the predictor know what it knows and what it does not know? In this work we study the theoretical foundations of loss prediction. Our main contribution is to establish tight connections between nontrivial loss prediction and certain forms of multicalibration [Ursula Hébert-Johnson et al., 2018], a multigroup fairness notion that asks for calibrated predictions across computationally identifiable subgroups. Formally, we show that a loss predictor that is able to improve on the self-estimate of a predictor yields a witness to a failure of multicalibration, and vice versa. This implies that nontrivial loss prediction is, in effect, neither easier nor harder than auditing for multicalibration. We support our theoretical results with experiments that show a robust positive correlation between the multicalibration error of a predictor and the efficacy of training a loss predictor.
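
As a concrete illustration of the "self-estimate" mentioned above: under cross-entropy loss, a predictor that outputs a distribution over labels expects to incur loss equal to the entropy of that distribution, and a gap between self-estimated and realized loss signals miscalibration. A minimal synthetic sketch (all data invented; this is not the paper's experimental setup):

# Self-estimated vs. realized cross-entropy loss on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def self_estimate(p):
    """Entropy of the predicted label distribution (in nats)."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Binary task: predicted probabilities, with true labels drawn from a
# deliberately *different* (miscalibrated) distribution.
p1 = rng.uniform(0.1, 0.9, size=10_000)                    # predicted P(y=1)
y = (rng.uniform(size=10_000) < p1 ** 1.5).astype(float)   # miscalibrated truth
probs = np.stack([1 - p1, p1], axis=1)

realized = -np.log(np.clip(np.where(y == 1, p1, 1 - p1), 1e-12, 1.0))
print("mean self-estimate:", self_estimate(probs).mean())
print("mean realized loss:", realized.mean())   # gap signals miscalibration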

Cite as

Aravind Gollakota, Parikshit Gopalan, Aayush Karan, Charlotte Peale, and Udi Wieder. When Does a Predictor Know Its Own Loss?. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 22:1-22:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gollakota_et_al:LIPIcs.FORC.2025.22,
  author =	{Gollakota, Aravind and Gopalan, Parikshit and Karan, Aayush and Peale, Charlotte and Wieder, Udi},
  title =	{{When Does a Predictor Know Its Own Loss?}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{22:1--22:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.22},
  URN =		{urn:nbn:de:0030-drops-231490},
  doi =		{10.4230/LIPIcs.FORC.2025.22},
  annote =	{Keywords: loss prediction, multicalibration, active learning, algorithmic fairness, calibration, predictive uncertainty, uncertainty estimation, machine learning theory}
}
Document
The Correlated Gaussian Sparse Histogram Mechanism

Authors: Christian Janos Lebeda and Lukas Retschmeier


Abstract
We consider the problem of releasing a sparse histogram under (ε, δ)-differential privacy. The stability histogram independently adds noise from a Laplace or Gaussian distribution to the non-zero entries and removes those noisy counts that fall below a threshold. Thereby, new non-zero values introduced between neighboring histograms are revealed only with probability at most δ, and typically the value of the threshold dominates the error of the mechanism. We consider the variant of the stability histogram with Gaussian noise. Recent works ([Joseph and Yu, COLT '24] and [Lebeda, SOSA '25]) reduced the error for private histograms using correlated Gaussian noise. However, these techniques cannot be applied directly in the very sparse setting. Instead, we adopt Lebeda’s technique and show that adding correlated noise to the non-zero counts allows us to reduce the magnitude of the noise only when we have a sparsity bound. This, in turn, allows us to lower the threshold by up to a factor of 1/2 compared to the non-correlated noise mechanism. We then extend our mechanism to a setting without a known bound on sparsity. Additionally, we show that correlated noise gives a similar improvement for the more practical discrete Gaussian mechanism.
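
A minimal sketch of the baseline (non-correlated) Gaussian stability histogram described above: add Gaussian noise only to the non-zero counts and suppress noisy counts below a threshold. The noise scale and threshold here are illustrative placeholders, not the calibrated (ε, δ) values, and the paper's correlated-noise mechanism differs:

# Baseline Gaussian stability histogram; parameters are toy values.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_stability_histogram(counts, sigma, threshold):
    """counts: dict mapping keys to positive integer counts (zeros omitted)."""
    out = {}
    for key, c in counts.items():
        noisy = c + rng.normal(0.0, sigma)
        if noisy >= threshold:          # suppress small noisy counts
            out[key] = noisy
    return out

counts = {"a": 40, "b": 3, "c": 17}     # sparse histogram: zero entries omitted
sigma = 2.0                             # in practice calibrated from (eps, delta)
threshold = 1 + 3.0 * sigma             # illustrative; the real threshold uses delta
print(gaussian_stability_histogram(counts, sigma, threshold))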

Cite as

Christian Janos Lebeda and Lukas Retschmeier. The Correlated Gaussian Sparse Histogram Mechanism. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{lebeda_et_al:LIPIcs.FORC.2025.23,
  author =	{Lebeda, Christian Janos and Retschmeier, Lukas},
  title =	{{The Correlated Gaussian Sparse Histogram Mechanism}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.23},
  URN =		{urn:nbn:de:0030-drops-231503},
  doi =		{10.4230/LIPIcs.FORC.2025.23},
  annote =	{Keywords: differential privacy, correlated noise, sparse gaussian histograms}
}
