3 Search Results for "Singh, Somesh"


Document
Private Estimation When Data and Privacy Demands Are Correlated

Authors: Syomantak Chaudhuri and Thomas A. Courtade

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Differential Privacy (DP) is the current gold standard for ensuring privacy for statistical queries. Estimation problems under DP constraints in the literature have largely focused on providing equal privacy to all users. We consider the problems of empirical mean estimation for univariate data and frequency estimation for categorical data, both subject to heterogeneous privacy constraints. Each user, contributing a sample to the dataset, is allowed to have a different privacy demand. The dataset itself is assumed to be worst-case, and we study both problems under two formulations: first, where privacy demands and data may be correlated, and second, where correlations are weakened by a random permutation of the dataset. We establish theoretical performance guarantees for our proposed algorithms under both PAC error and mean-squared error. These guarantees translate to minimax optimality in several instances, and experiments confirm the superior performance of our algorithms over other baseline techniques.
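
As a rough sketch of the heterogeneous-privacy setting (a generic Laplace-mechanism baseline under assumed [0, 1]-bounded data, not the estimators proposed in the paper), each user could perturb their value with noise calibrated to their own privacy demand, and the analyst could then combine the reports with inverse-variance weights. The function and variable names below are illustrative.

```python
import numpy as np

def hetero_dp_mean(data, epsilons, rng=None):
    """Baseline heterogeneous-DP mean estimate (illustrative, not the
    paper's algorithm): each user's value, clipped to [0, 1], is perturbed
    with Laplace noise calibrated to that user's own privacy demand
    epsilon_i, and the noisy reports are combined with inverse-variance
    weights."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(data, dtype=float), 0.0, 1.0)
    eps = np.asarray(epsilons, dtype=float)
    # Laplace scale 1/eps_i gives eps_i-DP for a value in [0, 1] (sensitivity 1).
    noisy = x + rng.laplace(scale=1.0 / eps, size=x.shape)
    # Noise variance is 2/eps_i^2, so inverse-variance weights are eps_i^2.
    w = eps ** 2
    return float(np.sum(w * noisy) / np.sum(w))

# Users with stricter demands (smaller epsilon) report more noisily
# and are therefore down-weighted in the combined estimate.
values = [0.2, 0.4, 0.9, 0.7]
demands = [0.1, 0.5, 1.0, 2.0]
print(hetero_dp_mean(values, demands))
```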

Cite as

Syomantak Chaudhuri and Thomas A. Courtade. Private Estimation When Data and Privacy Demands Are Correlated. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chaudhuri_et_al:LIPIcs.FORC.2025.3,
  author =	{Chaudhuri, Syomantak and Courtade, Thomas A.},
  title =	{{Private Estimation When Data and Privacy Demands Are Correlated}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.3},
  URN =		{urn:nbn:de:0030-drops-231305},
  doi =		{10.4230/LIPIcs.FORC.2025.3},
  annote =	{Keywords: Differential Privacy, Personalized Privacy, Heterogeneous Privacy, Correlations in Privacy}
}
Document
Academic Track
A View on Vulnerabilites: The Security Challenges of XAI (Academic Track)

Authors: Elisabeth Pachl, Fabian Langer, Thora Markert, and Jeanette Miriam Lorenz

Published in: OASIcs, Volume 126, Symposium on Scaling AI Assessments (SAIA 2024)


Abstract
Modern deep learning methods have long been considered as black-boxes due to their opaque decision-making processes. Explainable Artificial Intelligence (XAI), however, has turned the tables: it provides insight into how these models work, promoting transparency that is crucial for accountability. Yet, recent developments in adversarial machine learning have highlighted vulnerabilities in XAI methods, raising concerns about security, reliability, and trustworthiness, particularly in sensitive areas like healthcare and autonomous systems. Awareness of the potential risks associated with XAI is needed as its adoption increases, driven in part by the need to enhance compliance with regulations. This survey provides a holistic perspective on the security and safety landscape surrounding XAI, categorizing research on adversarial attacks against XAI and the misuse of explainability to enhance attacks on AI systems, such as evasion and privacy breaches. Our contribution includes identifying current insecurities in XAI and outlining future research directions in adversarial XAI. This work serves as an accessible foundation and outlook to recognize potential research gaps and define future directions. It identifies data modalities, such as time-series or graph data, and XAI methods that have not been extensively investigated for vulnerabilities in current research.
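
As a toy illustration of how explanation access can be misused to mount attacks (a hypothetical example, not drawn from the survey): for a linear scorer f(x) = w·x + b, a gradient-based saliency explanation is exactly w, so an attacker who can query explanations recovers the model's weights from a single query, a simple form of model extraction that can feed later evasion or privacy attacks.

```python
import numpy as np

# Hypothetical setup: a linear model with secret weights, and an
# explanation API that returns the input gradient (saliency) of the score.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=5)   # the model's secret weights
b_hidden = 0.3

def score(x):
    return float(w_hidden @ x + b_hidden)

def saliency(x, h=1e-6):
    """Gradient-based 'explanation': central finite-difference input gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (score(x + e) - score(x - e)) / (2 * h)
    return g

x_query = rng.normal(size=5)
w_stolen = saliency(x_query)            # one explanation query leaks w
print(np.allclose(w_stolen, w_hidden))  # True: the explanation equals the weights
```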

Cite as

Elisabeth Pachl, Fabian Langer, Thora Markert, and Jeanette Miriam Lorenz. A View on Vulnerabilites: The Security Challenges of XAI (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 12:1-12:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{pachl_et_al:OASIcs.SAIA.2024.12,
  author =	{Pachl, Elisabeth and Langer, Fabian and Markert, Thora and Lorenz, Jeanette Miriam},
  title =	{{A View on Vulnerabilites: The Security Challenges of XAI}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{12:1--12:23},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.12},
  URN =		{urn:nbn:de:0030-drops-227523},
  doi =		{10.4230/OASIcs.SAIA.2024.12},
  annote =	{Keywords: Explainability, XAI, Transparency, Adversarial Machine Learning, Security, Vulnerabilities}
}
Document
Engineering Fast Algorithms for the Bottleneck Matching Problem

Authors: Ioannis Panagiotas, Grégoire Pichon, Somesh Singh, and Bora Uçar

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We investigate the maximum bottleneck matching problem in bipartite graphs. Given a bipartite graph with nonnegative edge weights, the problem is to find a maximum-cardinality matching in which the minimum weight of an edge is maximized. To the best of our knowledge, there are two widely used solvers for this problem, based on two different approaches. A third approach is known in the literature but seems inferior to the other two, which is presumably why no implementation of it exists. We take this third approach, make theoretical observations to improve its behavior, and implement the improved method. Experiments show that the run time of the two existing solvers can be too high to be useful in many interesting cases. Furthermore, their performance is not predictable, and slight perturbations of the input graph lead to considerable changes in run time. The proposed solver's performance, on the other hand, is much more stable; it is almost always faster than or comparable to the two existing solvers, and its run time always remains low.
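
For concreteness, a simple threshold-and-match baseline (not any of the solvers engineered in the paper) can be sketched as follows: binary-search over the distinct edge weights for the largest threshold t such that keeping only edges of weight at least t still admits a matching of maximum cardinality. All names below are illustrative.

```python
def max_matching_size(n_left, n_right, adj):
    """Maximum-cardinality bipartite matching via Kuhn's augmenting paths.
    adj[u] lists the right-side neighbours of left vertex u."""
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(n_left))


def bottleneck_matching_value(n_left, n_right, edges):
    """Threshold-based bottleneck matching baseline: the largest threshold t
    for which the subgraph of edges with weight >= t still has a matching of
    maximum cardinality."""
    full_adj = [[] for _ in range(n_left)]
    for u, v, _ in edges:
        full_adj[u].append(v)
    target = max_matching_size(n_left, n_right, full_adj)

    weights = sorted({w for _, _, w in edges})
    lo, hi, best = 0, len(weights) - 1, weights[0]
    while lo <= hi:
        mid = (lo + hi) // 2
        adj = [[] for _ in range(n_left)]
        for u, v, w in edges:
            if w >= weights[mid]:
                adj[u].append(v)
        if max_matching_size(n_left, n_right, adj) == target:
            best, lo = weights[mid], mid + 1  # feasible; try a larger threshold
        else:
            hi = mid - 1
    return best


# Example: a 3x3 bipartite graph given as (left, right, weight) triples.
edges = [(0, 0, 5), (0, 1, 1), (1, 1, 4), (1, 2, 2), (2, 0, 3), (2, 2, 6)]
print(bottleneck_matching_value(3, 3, edges))  # prints 4, the bottleneck of an optimal matching
```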

Cite as

Ioannis Panagiotas, Grégoire Pichon, Somesh Singh, and Bora Uçar. Engineering Fast Algorithms for the Bottleneck Matching Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 87:1-87:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{panagiotas_et_al:LIPIcs.ESA.2023.87,
  author =	{Panagiotas, Ioannis and Pichon, Gr\'{e}goire and Singh, Somesh and U\c{c}ar, Bora},
  title =	{{Engineering Fast Algorithms for the Bottleneck Matching Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{87:1--87:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.87},
  URN =		{urn:nbn:de:0030-drops-187406},
  doi =		{10.4230/LIPIcs.ESA.2023.87},
  annote =	{Keywords: bipartite graphs, assignment problem, matching}
}
  • Refine by Type
  • 3 Document/PDF
  • 2 Document/HTML

  • Refine by Publication Year
  • 2 2025
  • 1 2023

  • Refine by Author
  • 1 Chaudhuri, Syomantak
  • 1 Courtade, Thomas A.
  • 1 Langer, Fabian
  • 1 Lorenz, Jeanette Miriam
  • 1 Markert, Thora

  • Refine by Series/Journal
  • 2 LIPIcs
  • 1 OASIcs

  • Refine by Classification
  • 1 Computing methodologies → Machine learning
  • 1 Security and privacy
  • 1 Theory of computation

  • Refine by Keyword
  • 1 Adversarial Machine Learning
  • 1 Correlations in Privacy
  • 1 Differential Privacy
  • 1 Explainability
  • 1 Heterogeneous Privacy
