Search Results

Documents authored by Bastani, Osbert


Extended Abstract
Rethinking Fairness for Human-AI Collaboration (Extended Abstract)

Authors: Haosen Ge, Hamsa Bastani, and Osbert Bastani

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Most work on algorithmic fairness focuses on whether the algorithm makes fair decisions in isolation. Yet, these algorithms are rarely used in high-stakes settings without human oversight, since there are still considerable legal and regulatory challenges to full automation. Moreover, many believe that human-AI collaboration is superior to full automation because human experts may have auxiliary information that can help correct the mistakes of algorithms, producing better decisions than the human or algorithm alone. However, human-AI collaboration introduces new complexities: the overall outcomes now depend not only on the algorithmic recommendations, but also on the subset of individuals for whom the human decision-maker complies with the algorithmic recommendation. Recent studies have shown that selective compliance with algorithms can amplify discrimination relative to the prior human policy, even if the algorithmic policy is fair in the traditional sense. As a consequence, ensuring equitable outcomes requires fundamentally different algorithmic design principles that ensure robustness to the decision-maker's (a priori unknown) compliance pattern. To resolve this state of affairs, we introduce the notion of compliance-robust algorithms, i.e., algorithmic decision policies that are guaranteed to (weakly) improve fairness in final outcomes, regardless of the human's (unknown) compliance pattern with algorithmic recommendations. In particular, given a human decision-maker and her policy (without access to AI assistance), we characterize the class of algorithmic recommendations that never result in collaborative final outcomes that are less fair than the pre-existing human policy, even if the decision-maker's compliance pattern is adversarial. Next, we prove that there exists considerable tension between traditional algorithmic fairness and compliance-robust fairness. Unless the true data-generating process is itself perfectly fair, it can be infeasible to design an algorithmic policy that simultaneously satisfies traditional algorithmic fairness, is compliance-robustly fair, and is more accurate than the human-only policy; this raises the question of whether traditional fairness is even a desirable constraint to enforce for human-AI collaboration. If the goal is to improve fairness and accuracy in human-AI collaborative outcomes, it may be preferable to design an algorithmic policy that is accurate and compliance-robustly fair, but not traditionally fair. Our last result shows that the tension between traditional fairness and compliance-robust fairness is prevalent. Specifically, we prove that for a broad class of fairness definitions, fair policies are not necessarily compliance-robustly fair, implying that compliance-robust fairness imposes fundamentally different constraints compared to traditional fairness.
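The abstract's central observation, that selective compliance can amplify discrimination even when the algorithmic policy is fair in isolation, can be made concrete with a toy calculation. The Python sketch below is purely illustrative and not from the paper; the groups, policies, and compliance pattern are hypothetical. Both the human policy and the algorithmic policy satisfy demographic parity on their own, yet an adversarial compliance pattern produces collaborative outcomes with a large parity gap.

```python
# Toy numerical illustration of selective compliance (all policies,
# groups, and numbers are hypothetical, chosen only for intuition).

def final_outcomes(human, algo, comply):
    # Final decision: the algorithmic recommendation where the human
    # complies, the human's own decision everywhere else.
    return [a if c else h for h, a, c in zip(human, algo, comply)]

def positive_rate(decisions, group, g):
    # Fraction of positive decisions received by members of group g.
    sel = [d for d, x in zip(decisions, group) if x == g]
    return sum(sel) / len(sel)

# Two groups (0 and 1) with four individuals each.
group = [0, 0, 0, 0, 1, 1, 1, 1]
human = [1, 1, 0, 0, 1, 1, 0, 0]  # human policy: 50% rate in each group
algo  = [1, 0, 1, 0, 1, 0, 1, 0]  # algorithm: also 50% in each group

# Adversarial compliance pattern: the human follows the algorithm
# exactly where doing so favors group 0 and disfavors group 1.
comply = [1, 0, 1, 0, 0, 1, 0, 1]

out = final_outcomes(human, algo, comply)
gap = abs(positive_rate(out, group, 0) - positive_rate(out, group, 1))
# Both component policies satisfy demographic parity (gap 0), yet the
# collaborative outcome gives group 0 a 0.75 positive rate and
# group 1 a 0.25 rate, a gap of 0.5.
```

A compliance-robust policy, in the paper's sense, would rule out recommendation sets for which any such compliance pattern can degrade fairness below the human-only baseline.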

Cite as

Haosen Ge, Hamsa Bastani, and Osbert Bastani. Rethinking Fairness for Human-AI Collaboration (Extended Abstract). In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, p. 52:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ge_et_al:LIPIcs.ITCS.2024.52,
  author =	{Ge, Haosen and Bastani, Hamsa and Bastani, Osbert},
  title =	{{Rethinking Fairness for Human-AI Collaboration}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{52:1--52:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.52},
  URN =		{urn:nbn:de:0030-drops-195805},
  doi =		{10.4230/LIPIcs.ITCS.2024.52},
  annote =	{Keywords: fairness, human-AI collaboration, selective compliance}
}
Document
Eventually Sound Points-To Analysis with Specifications

Authors: Osbert Bastani, Rahul Sharma, Lazaro Clapp, Saswat Anand, and Alex Aiken

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Static analyses make the increasingly tenuous assumption that all source code is available for analysis; for example, large libraries often call into native code that cannot be analyzed. We propose a points-to analysis that initially makes optimistic assumptions about missing code, and then inserts runtime checks that report counterexamples to these assumptions that occur during execution. Our approach guarantees eventual soundness, which combines two guarantees: (i) the runtime checks are guaranteed to catch the first counterexample that occurs during any execution, in which case execution can be terminated to prevent harm, and (ii) only finitely many counterexamples ever occur, implying that the static analysis eventually becomes statically sound with respect to all remaining executions. We implement Optix, an eventually sound points-to analysis for Android apps, where the Android framework is missing. We show that the runtime checks added by Optix incur low overhead on real programs, and demonstrate how Optix improves a client information flow analysis for detecting Android malware.
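The mechanism described above, optimistic assumptions about missing code backed by runtime checks, can be sketched generically. The Python sketch below is a hypothetical analogue, not the Optix implementation (which instruments Android apps and tracks points-to facts): an optimistic assumption about an unanalyzable function is guarded by a runtime check that records the first counterexample and halts execution, matching guarantee (i) from the abstract.

```python
# Hypothetical sketch of the optimistic-assumption + runtime-check
# pattern from the abstract (not the actual Optix implementation).

class AssumptionViolation(Exception):
    """Raised on the first runtime counterexample to an optimistic
    assumption, so execution can be terminated to prevent harm."""

counterexamples = []  # logged violations, fed back to refine the analysis

def check_assumption(description, predicate, value):
    # Guard an optimistic static-analysis assumption at runtime.
    if not predicate(value):
        counterexamples.append((description, value))
        raise AssumptionViolation(description)
    return value

def analyzed_caller(native_fn, x):
    # The static analysis optimistically assumed native_fn returns a
    # non-negative int; verify that before the result flows onward.
    result = check_assumption(
        "native_fn returns a non-negative int",
        lambda v: isinstance(v, int) and v >= 0,
        native_fn(x),
    )
    return result * 2
```

Guarantee (ii), that only finitely many counterexamples ever occur, comes from feeding each logged violation back into the analysis so the corresponding assumption is never made again; eventually the remaining assumptions are sound for all executions.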

Cite as

Osbert Bastani, Rahul Sharma, Lazaro Clapp, Saswat Anand, and Alex Aiken. Eventually Sound Points-To Analysis with Specifications. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 11:1-11:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bastani_et_al:LIPIcs.ECOOP.2019.11,
  author =	{Bastani, Osbert and Sharma, Rahul and Clapp, Lazaro and Anand, Saswat and Aiken, Alex},
  title =	{{Eventually Sound Points-To Analysis with Specifications}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{11:1--11:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.11},
  URN =		{urn:nbn:de:0030-drops-108038},
  doi =		{10.4230/LIPIcs.ECOOP.2019.11},
  annote =	{Keywords: specification inference, static points-to analysis, runtime monitoring}
}