Search Results

Documents authored by Kearns, Michael


Document
Model Ensembling for Constrained Optimization

Authors: Ira Globus-Harris, Varun Gupta, Michael Kearns, and Aaron Roth

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Many instances of decision making under objective uncertainty can be decomposed into two steps: predicting the objective function and then optimizing for the best feasible action under the estimate of the objective vector. We study the problem of ensembling models for optimization of uncertain linear objectives under arbitrary constraints. We imagine we are given a collection of predictive models mapping a feature space to multi-dimensional real-valued predictions, which form the coefficients of a linear objective that we would like to optimize. We give two ensembling methods that can provably result in transparent decisions that strictly improve on all initial policies. The first method operates in the "white box" setting, in which we have access to the underlying prediction models, and the second in the "black box" setting, in which we only have access to the induced decisions (in the downstream optimization problem) of the constituent models, but not their underlying point predictions. Both methods are transparent, or trustworthy, in the sense that the user can reliably predict long-term ensemble rewards even if the instance-by-instance predictions are imperfect.
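The predict-then-optimize decomposition in the abstract can be made concrete with a small sketch. The snippet below illustrates the setting only, not the paper's white-box or black-box ensembling algorithms: hypothetical prediction models map features to coefficient vectors, each induced decision is obtained by solving the downstream linear program (here with SciPy's linprog), and the induced decisions are then compared in hindsight. All models, constraints, and objective values are made-up placeholders.

# Hypothetical sketch of the predict-then-optimize setting described above
# (not the paper's ensembling method). Each model maps features x to an
# estimated coefficient vector c_hat of a linear objective, which is then
# maximized over a fixed feasible region {w : A w <= b, w >= 0}.
import numpy as np
from scipy.optimize import linprog

def induced_decision(c_hat, A, b):
    """Downstream optimization: maximize c_hat . w subject to A w <= b, w >= 0.
    linprog minimizes, so we negate the objective."""
    res = linprog(c=-np.asarray(c_hat), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * len(c_hat))
    return res.x

# Two placeholder "prediction models" (they ignore x for simplicity).
models = [lambda x: np.array([0.6, 0.4]),
          lambda x: np.array([0.3, 0.7])]

A = np.array([[1.0, 1.0]])   # single budget constraint: w1 + w2 <= 1
b = np.array([1.0])

# "White box" access: we see each model's point predictions c_hat.
# "Black box" access: we would only see the induced decisions below.
decisions = [induced_decision(m(x=None), A, b) for m in models]

# With a realized objective in hand (available only in hindsight), we can
# compare the downstream rewards of the constituent models' decisions.
c_true = np.array([0.2, 0.9])
for w in decisions:
    print("decision", w, "realized reward", float(c_true @ w))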

Cite as

Ira Globus-Harris, Varun Gupta, Michael Kearns, and Aaron Roth. Model Ensembling for Constrained Optimization. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{globusharris_et_al:LIPIcs.FORC.2025.14,
  author =	{Globus Harris, Ira and Gupta, Varun and Kearns, Michael and Roth, Aaron},
  title =	{{Model Ensembling for Constrained Optimization}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{14:1--14:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.14},
  URN =		{urn:nbn:de:0030-drops-231412},
  doi =		{10.4230/LIPIcs.FORC.2025.14},
  annote =	{Keywords: model ensembling, trustworthy AI, decision-making under uncertainty}
}
Document
Balanced Filtering via Disclosure-Controlled Proxies

Authors: Siqi Deng, Emily Diana, Michael Kearns, and Aaron Roth

Published in: LIPIcs, Volume 295, 5th Symposium on Foundations of Responsible Computing (FORC 2024)


Abstract
We study the problem of collecting a cohort or set that is balanced with respect to sensitive groups when group membership is unavailable or prohibited from use at deployment time. Specifically, our deployment-time collection mechanism does not reveal significantly more about the group membership of any individual sample than can be ascertained from base rates alone. To do this, we study a learner that can use a small set of labeled data to train a proxy function that can later be used for this filtering or selection task. We then associate the range of the proxy function with sampling probabilities; given a new example, we classify it using our proxy function and then select it with probability corresponding to its proxy classification. Importantly, we require that the proxy classification does not reveal significantly more information about the sensitive group membership of any individual example compared to population base rates alone (i.e., the level of disclosure should be controlled) and show that we can find such a proxy in a sample- and oracle-efficient manner. Finally, we experimentally evaluate our algorithm and analyze its generalization properties.
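As a concrete illustration of the deployment-time selection rule sketched in the abstract, the snippet below keeps each example with a probability attached to its proxy class. The proxy function and the per-class probabilities are made-up placeholders; the paper's contribution is learning a proxy whose classes both balance the cohort and control disclosure, which this sketch does not attempt.

# Hypothetical sketch of the deployment-time filtering step described above:
# a trained proxy assigns each example a bucket, each bucket has a sampling
# probability, and an example is kept with that probability. The proxy and
# probabilities below are placeholders, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

def proxy(example):
    """Placeholder proxy: buckets examples by the sign of one feature."""
    return 0 if example[0] < 0 else 1

# Per-bucket sampling probabilities (in the paper these are chosen so the
# selected cohort is balanced across sensitive groups while limiting how much
# the selection reveals about any individual's group membership).
sampling_prob = {0: 0.8, 1: 0.3}

def filter_cohort(examples):
    kept = []
    for ex in examples:
        if rng.random() < sampling_prob[proxy(ex)]:
            kept.append(ex)
    return kept

examples = rng.normal(size=(10, 3))
print(len(filter_cohort(examples)), "of", len(examples), "examples kept")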

Cite as

Siqi Deng, Emily Diana, Michael Kearns, and Aaron Roth. Balanced Filtering via Disclosure-Controlled Proxies. In 5th Symposium on Foundations of Responsible Computing (FORC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 295, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{deng_et_al:LIPIcs.FORC.2024.4,
  author =	{Deng, Siqi and Diana, Emily and Kearns, Michael and Roth, Aaron},
  title =	{{Balanced Filtering via Disclosure-Controlled Proxies}},
  booktitle =	{5th Symposium on Foundations of Responsible Computing (FORC 2024)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-319-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{295},
  editor =	{Rothblum, Guy N.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2024.4},
  URN =		{urn:nbn:de:0030-drops-200872},
  doi =		{10.4230/LIPIcs.FORC.2024.4},
  annote =	{Keywords: Algorithms, Sampling, Ethical/Societal Implications}
}
Document
An Algorithmic Framework for Fairness Elicitation

Authors: Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders. We introduce a framework in which pairs of individuals can be identified as requiring (approximately) equal treatment under a learned model, or requiring ordered treatment such as "applicant Alice should be at least as likely to receive a loan as applicant Bob". We provide a provably convergent and oracle efficient algorithm for learning the most accurate model subject to the elicited fairness constraints, and prove generalization bounds for both accuracy and fairness. This algorithm can also combine the elicited constraints with traditional statistical fairness notions, thus "correcting" or modifying the latter by the former. We report preliminary findings of a behavioral study of our framework using human-subject fairness constraints elicited on the COMPAS criminal recidivism dataset.
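The pairwise constraints in the abstract ("equal treatment" or "ordered treatment") can be written down directly; the sketch below shows one hypothetical way to encode elicited pairs and measure how much a scoring model violates them. It covers only the constraint bookkeeping, not the paper's provably convergent, oracle-efficient learning algorithm, and all names and values are placeholders.

# Hypothetical encoding of elicited pairwise fairness constraints and a
# measure of how much a scoring model violates them (illustration only).
import numpy as np

# Each constraint (i, j, kind): kind "geq" means individual i should score at
# least as high as j ("Alice should be at least as likely to receive a loan
# as Bob"); "eq" means the two scores should be within a tolerance eps.
constraints = [(0, 1, "geq"), (2, 3, "eq")]
eps = 0.05

def total_violation(scores, constraints, eps):
    total = 0.0
    for i, j, kind in constraints:
        if kind == "geq":
            total += max(0.0, scores[j] - scores[i])
        else:  # "eq"
            total += max(0.0, abs(scores[i] - scores[j]) - eps)
    return total

scores = np.array([0.7, 0.6, 0.4, 0.55])   # model scores for four individuals
print("total constraint violation:", total_violation(scores, constraints, eps))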

Cite as

Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu. An Algorithmic Framework for Fairness Elicitation. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 2:1-2:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{jung_et_al:LIPIcs.FORC.2021.2,
  author =	{Jung, Christopher and Kearns, Michael and Neel, Seth and Roth, Aaron and Stapleton, Logan and Wu, Zhiwei Steven},
  title =	{{An Algorithmic Framework for Fairness Elicitation}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{2:1--2:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.2},
  URN =		{urn:nbn:de:0030-drops-138701},
  doi =		{10.4230/LIPIcs.FORC.2021.2},
  annote =	{Keywords: Fairness, Fairness Elicitation}
}
Document
Lexicographically Fair Learning: Algorithms and Generalization

Authors: Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short). Informally, given a collection of demographic groups of interest, minimax fairness asks that the error of the group with the highest error be minimized. Lexifairness goes further and asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized, and amongst all of those solutions, the error of the group with the third highest error should be minimized, and so on. Despite its naturalness, correctly defining lexifairness is considerably more subtle than minimax fairness, because of inherent sensitivity to approximation error. We give a notion of approximate lexifairness that avoids this issue, and then derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting. When the underlying empirical risk minimization problem absent fairness constraints is convex (as it is, for example, with linear and logistic regression), our algorithms are provably efficient even in the worst case. Finally, we show generalization bounds: approximate lexifairness on the training sample implies approximate lexifairness on the true distribution with high probability. Our ability to prove generalization bounds depends on our choosing definitions that avoid the instability of naive definitions.
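The lexicographic objective can be illustrated with a brute-force comparison over a small set of candidate predictors: sort each candidate's per-group errors in descending order and compare the resulting vectors lexicographically. The sketch below, with made-up predictors and data, shows the comparison only; the paper's oracle-efficient algorithms avoid such enumeration and use an approximate notion of lexifairness that is robust to estimation error.

# Hypothetical illustration of the lexicographic comparison underlying
# lexifairness: prefer the candidate whose descending-sorted vector of
# per-group errors is lexicographically smallest.
import numpy as np

def group_errors(predict, groups):
    """Per-group 0/1 error rates for a candidate predictor."""
    return [float(np.mean(predict(X) != y)) for X, y in groups]

def lexifair_choice(predictors, groups):
    """Exhaustive comparison over candidates (illustration only, not the
    paper's oracle-efficient procedure)."""
    return min(predictors,
               key=lambda p: tuple(sorted(group_errors(p, groups), reverse=True)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy demographic groups, each with one binary feature and binary labels.
    groups = [(rng.integers(0, 2, size=(50, 1)), rng.integers(0, 2, size=50)),
              (rng.integers(0, 2, size=(50, 1)), rng.integers(0, 2, size=50))]
    predictors = [lambda X: np.zeros(len(X), dtype=int),   # always predict 0
                  lambda X: np.ones(len(X), dtype=int),    # always predict 1
                  lambda X: X[:, 0]]                       # copy the feature
    best = lexifair_choice(predictors, groups)
    print("per-group errors of chosen predictor:", group_errors(best, groups))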

Cite as

Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi. Lexicographically Fair Learning: Algorithms and Generalization. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{diana_et_al:LIPIcs.FORC.2021.6,
  author =	{Diana, Emily and Gill, Wesley and Globus-Harris, Ira and Kearns, Michael and Roth, Aaron and Sharifi-Malvajerdi, Saeed},
  title =	{{Lexicographically Fair Learning: Algorithms and Generalization}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.6},
  URN =		{urn:nbn:de:0030-drops-138748},
  doi =		{10.4230/LIPIcs.FORC.2021.6},
  annote =	{Keywords: Fair Learning, Lexicographic Fairness, Online Learning, Game Theory}
}