78 Search Results for "Blum, Avrim"


Volume

LIPIcs, Volume 124

10th Innovations in Theoretical Computer Science Conference (ITCS 2019)

ITCS 2019, January 10-12, 2019, San Diego, California, USA

Editors: Avrim Blum

Document
Winning Without Observing Payoffs: Exploiting Behavioral Biases to Win Nearly Every Round

Authors: Avrim Blum and Melissa Dutz

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Gameplay under various forms of uncertainty has been widely studied. Feldman et al. [Michal Feldman et al., 2010] studied a particularly low-information setting in which one observes the opponent’s actions but no payoffs, not even one’s own, and introduced an algorithm which guarantees one’s payoff nonetheless approaches the minimax optimal value (i.e., zero) in a symmetric zero-sum game. Against an opponent playing a minimax-optimal strategy, approaching the value of the game is the best one can hope to guarantee. However, a wealth of research in behavioral economics shows that people often do not make perfectly rational, optimal decisions. Here we consider whether it is possible to actually win in this setting if the opponent is behaviorally biased. We model several deterministic, biased opponents and show that even without knowing the game matrix in advance or observing any payoffs, it is possible to take advantage of each bias in order to win nearly every round (so long as the game has the property that each action beats and is beaten by at least one other action). We also provide a partial characterization of the kinds of biased strategies that can be exploited to win nearly every round, and provide algorithms for beating some kinds of biased strategies even when we don't know which strategy the opponent uses.

Cite as

Avrim Blum and Melissa Dutz. Winning Without Observing Payoffs: Exploiting Behavioral Biases to Win Nearly Every Round. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blum_et_al:LIPIcs.ITCS.2024.18,
  author =	{Blum, Avrim and Dutz, Melissa},
  title =	{{Winning Without Observing Payoffs: Exploiting Behavioral Biases to Win Nearly Every Round}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.18},
  URN =		{urn:nbn:de:0030-drops-195463},
  doi =		{10.4230/LIPIcs.ITCS.2024.18},
  annote =	{Keywords: Game theory, Behavioral bias}
}
Document
Setting Fair Incentives to Maximize Improvement

Authors: Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
We consider the problem of helping agents improve by setting goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach (or do nothing if no target level is within reach). We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and we consider fairness objectives when the agents belong to different underlying populations. We prove algorithmic, learning, and structural results for each model. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement; agents who previously tried hard to reach a distant target now have a closer target to reach and hence improve less. This is especially challenging when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, despite the non-monotonicity property and interference of the target levels, we show that a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we give counterexamples to this result in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
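
The improvement model just described is simple enough to put in a few lines of code. The sketch below is our own illustration of the common improvement capacity model (the function name and parameters are ours, not the paper's): each agent moves to the nearest target at or above their initial level that is within their capacity, and the worked example at the bottom reproduces the non-monotonicity noted in the abstract, where adding a target level lowers total improvement.

def total_improvement(initial_levels, capacity, targets):
    # Common improvement capacity model (illustrative): each agent moves to the
    # closest target at or above their initial level that is within `capacity`,
    # or stays put if no such target exists.
    total = 0.0
    for s in initial_levels:
        reachable = [t for t in targets if s <= t <= s + capacity]
        if reachable:
            total += min(reachable) - s   # the agent settles for the nearest target
    return total

# Non-monotonicity: adding a target level can decrease total improvement.
agents, capacity = [0.0], 5.0
print(total_improvement(agents, capacity, [5.0]))        # 5.0: the agent climbs to 5
print(total_improvement(agents, capacity, [2.0, 5.0]))   # 2.0: the agent settles for 2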

Cite as

Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. Setting Fair Incentives to Maximize Improvement. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{ahmadi_et_al:LIPIcs.FORC.2023.5,
  author =	{Ahmadi, Saba and Beyhaghi, Hedyeh and Blum, Avrim and Naggita, Keziah},
  title =	{{Setting Fair Incentives to Maximize Improvement}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.5},
  URN =		{urn:nbn:de:0030-drops-179261},
  doi =		{10.4230/LIPIcs.FORC.2023.5},
  annote =	{Keywords: Algorithmic Fairness, Learning for Strategic Behavior, Incentivizing Improvement}
}
Document
On Classification of Strategic Agents Who Can Both Game and Improve

Authors: Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
In this work, we consider classification of agents who can both game and improve. For example, people wishing to get a loan may be able to take some actions that increase their perceived credit-worthiness and others that also increase their true credit-worthiness. A decision-maker would like to define a classification rule with few false positives (does not give out many bad loans) while yielding many true positives (giving out many good loans), which includes encouraging agents to improve to become true positives if possible. We consider two models for this problem, a general discrete model and a linear model, and prove algorithmic, learning, and hardness results for each. For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting. We also show hardness for the problem of maximizing the number of true positives subject to a nonzero bound on the number of false positives, and that this hardness holds even for a finite-point version of our linear model. We also show that maximizing the number of true positives subject to no false positives is NP-hard in our full linear model. We additionally provide an algorithm that determines whether there exists a linear classifier that classifies all agents accurately and causes all improvable agents to become qualified, and give additional results for low-dimensional data.

Cite as

Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. On Classification of Strategic Agents Who Can Both Game and Improve. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ahmadi_et_al:LIPIcs.FORC.2022.3,
  author =	{Ahmadi, Saba and Beyhaghi, Hedyeh and Blum, Avrim and Naggita, Keziah},
  title =	{{On Classification of Strategic Agents Who Can Both Game and Improve}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.3},
  URN =		{urn:nbn:de:0030-drops-165269},
  doi =		{10.4230/LIPIcs.FORC.2022.3},
  annote =	{Keywords: Strategic Classification, Social Welfare, Learning}
}
Document
Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?

Authors: Avrim Blum and Kevin Stangl

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers. In this work we consider a different motivation: learning from biased training data. We posit several ways in which training data may be biased, including having a more noisy or negatively biased labeling process on members of a disadvantaged group, or a decreased prevalence of positive or negative examples from the disadvantaged group, or both. Given such biased training data, Empirical Risk Minimization (ERM) may produce a classifier that not only is biased but also has suboptimal accuracy on the true data distribution. We examine the ability of fairness-constrained ERM to correct this problem. In particular, we find that the Equal Opportunity fairness constraint [Hardt et al., 2016] combined with ERM will provably recover the Bayes optimal classifier under a range of bias models. We also consider other recovery methods, including re-weighting the training data, Equalized Odds, Demographic Parity, and Calibration. These theoretical results provide additional motivation for considering fairness interventions even if an actor cares primarily about accuracy.

Cite as

Avrim Blum and Kevin Stangl. Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blum_et_al:LIPIcs.FORC.2020.3,
  author =	{Blum, Avrim and Stangl, Kevin},
  title =	{{Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.3},
  URN =		{urn:nbn:de:0030-drops-120192},
  doi =		{10.4230/LIPIcs.FORC.2020.3},
  annote =	{Keywords: fairness in machine learning, equal opportunity, bias, machine learning}
}
Document
Advancing Subgroup Fairness via Sleeping Experts

Authors: Avrim Blum and Thodoris Lykouris

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We study methods for improving fairness to subgroups in settings with overlapping populations and sequential predictions. Classical notions of fairness focus on the balance of some property across different populations. However, in many applications the goal of the different groups is not to be predicted equally but rather to be predicted well. We demonstrate that the task of satisfying this guarantee for multiple overlapping groups is not straightforward and show that, for the simple objective of the unweighted average of the false negative and false positive rates, satisfying this guarantee for overlapping populations can be statistically impossible even when we are provided predictors that perform well separately on each subgroup. On the positive side, we show that when individuals are equally important to the different groups they belong to, this goal is achievable; to do so, we draw a connection to the sleeping experts literature in online learning. Motivated by the one-sided feedback in natural settings of interest, we extend our results to such a feedback model. We also provide a game-theoretic interpretation of our results, examining the incentives of participants to join the system and to provide the system full information about predictors they may possess. We end with several interesting open problems concerning the strength of guarantees that can be achieved in a computationally efficient manner.

Cite as

Avrim Blum and Thodoris Lykouris. Advancing Subgroup Fairness via Sleeping Experts. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 55:1-55:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blum_et_al:LIPIcs.ITCS.2020.55,
  author =	{Blum, Avrim and Lykouris, Thodoris},
  title =	{{Advancing Subgroup Fairness via Sleeping Experts}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{55:1--55:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.55},
  URN =		{urn:nbn:de:0030-drops-117402},
  doi =		{10.4230/LIPIcs.ITCS.2020.55},
  annote =	{Keywords: Online learning, Fairness, Game theory}
}
Document
Bilu-Linial Stability, Certified Algorithms and the Independent Set Problem

Authors: Haris Angelidakis, Pranjal Awasthi, Avrim Blum, Vaggos Chatziafratis, and Chen Dan

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
We study the classic Maximum Independent Set problem under the notion of stability introduced by Bilu and Linial (2010): a weighted instance of Independent Set is gamma-stable if it has a unique optimal solution that remains the unique optimal solution under multiplicative perturbations of the weights by a factor of at most gamma >= 1. The goal then is to efficiently recover this "pronounced" optimal solution exactly. In this work, we solve stable instances of Independent Set on several classes of graphs: we improve upon previous results by solving O~(Delta/sqrt(log Delta))-stable instances on graphs of maximum degree Delta, (k - 1)-stable instances on k-colorable graphs and (1 + epsilon)-stable instances on planar graphs (for any fixed epsilon > 0), using both combinatorial techniques as well as LPs and the Sherali-Adams hierarchy. For general graphs, we present a strong lower bound showing that there are no efficient algorithms for O(n^(1/2 - epsilon))-stable instances of Independent Set, assuming the planted clique conjecture. To complement our negative result, we give an algorithm for (epsilon n)-stable instances, for any fixed epsilon > 0. As a by-product of our techniques, we give algorithms as well as lower bounds for stable instances of Node Multiway Cut (a generalization of Edge Multiway Cut), by exploiting its connections to Vertex Cover. Furthermore, we prove a general structural result showing that the integrality gap of convex relaxations of several maximization problems reduces dramatically on stable instances. Moreover, we initiate the study of certified algorithms for Independent Set. The notion of a gamma-certified algorithm was introduced very recently by Makarychev and Makarychev (2018) and it is a class of gamma-approximation algorithms that satisfy one crucial property: the solution returned is optimal for a perturbation of the original instance, where perturbations are again multiplicative up to a factor of gamma >= 1 (hence, such algorithms not only solve gamma-stable instances optimally, but also have guarantees even on unstable instances). Here, we obtain Delta-certified algorithms for Independent Set on graphs of maximum degree Delta, and (1+epsilon)-certified algorithms on planar graphs. Finally, we analyze the algorithm of Berman and Fürer (1994) and prove that it is a ((Delta + 1)/3 + epsilon)-certified algorithm for Independent Set on graphs of maximum degree Delta where all weights are equal to 1.
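
For readers new to the stability notion, the brute-force checker below (our own exponential-time illustration, for tiny instances only) spells out the definition used above. Under the convention that a gamma-perturbation may inflate each weight by a factor in [1, gamma], an instance with unique optimum S* is gamma-stable exactly when w(S* \ S) > gamma * w(S \ S*) for every other independent set S; the checker tests that condition directly.

from itertools import combinations

def is_gamma_stable(n, edges, weights, gamma):
    # Brute-force Bilu-Linial gamma-stability check for weighted Independent Set.
    # Exponential time; intended only to illustrate the definition.
    edge_set = {frozenset(e) for e in edges}
    def independent(vertices):
        return all(frozenset(pair) not in edge_set for pair in combinations(vertices, 2))
    ind_sets = [frozenset(s) for r in range(n + 1)
                for s in combinations(range(n), r) if independent(s)]
    weight = lambda s: sum(weights[v] for v in s)
    opt = max(ind_sets, key=weight)
    # gamma-stable iff, for every other independent set S, w(opt \ S) > gamma * w(S \ opt).
    return all(weight(opt - s) > gamma * weight(s - opt) for s in ind_sets if s != opt)

# Path 0-1-2 with heavy endpoints: {0, 2} is the unique optimum, and the instance
# is gamma-stable for any gamma < 6 but not beyond.
print(is_gamma_stable(3, [(0, 1), (1, 2)], [3.0, 1.0, 3.0], gamma=2.0))  # True
print(is_gamma_stable(3, [(0, 1), (1, 2)], [3.0, 1.0, 3.0], gamma=7.0))  # False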

Cite as

Haris Angelidakis, Pranjal Awasthi, Avrim Blum, Vaggos Chatziafratis, and Chen Dan. Bilu-Linial Stability, Certified Algorithms and the Independent Set Problem. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{angelidakis_et_al:LIPIcs.ESA.2019.7,
  author =	{Angelidakis, Haris and Awasthi, Pranjal and Blum, Avrim and Chatziafratis, Vaggos and Dan, Chen},
  title =	{{Bilu-Linial Stability, Certified Algorithms and the Independent Set Problem}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.7},
  URN =		{urn:nbn:de:0030-drops-111288},
  doi =		{10.4230/LIPIcs.ESA.2019.7},
  annote =	{Keywords: Bilu-Linial stability, perturbation resilience, beyond worst-case analysis, Independent Set, Vertex Cover, Multiway Cut}
}
Document
Complete Volume
LIPIcs, Volume 124, ITCS'19, Complete Volume

Authors: Avrim Blum

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
LIPIcs, Volume 124, ITCS'19, Complete Volume

Cite as

10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Proceedings{blum:LIPIcs.ITCS.2019,
  title =	{{LIPIcs, Volume 124, ITCS'19, Complete Volume}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019},
  URN =		{urn:nbn:de:0030-drops-101704},
  doi =		{10.4230/LIPIcs.ITCS.2019},
  annote =	{Keywords: Theory of computation, Mathematics of computing}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Avrim Blum

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{blum:LIPIcs.ITCS.2019.0,
  author =	{Blum, Avrim},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.0},
  URN =		{urn:nbn:de:0030-drops-100937},
  doi =		{10.4230/LIPIcs.ITCS.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Submodular Secretary Problem with Shortlists

Authors: Shipra Agrawal, Mohammad Shadravan, and Cliff Stein

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In the submodular k-secretary problem, the goal is to select k items in a randomly ordered input so as to maximize the expected value of a given monotone submodular function on the set of selected items. In this paper, we introduce a relaxation of this problem, which we refer to as the submodular k-secretary problem with shortlists. In the proposed problem setting, the algorithm is allowed to choose more than k items as part of a shortlist. Then, after seeing the entire input, the algorithm can choose a subset of size k from the bigger set of items in the shortlist. We are interested in understanding to what extent this relaxation can improve the achievable competitive ratio for the submodular k-secretary problem. In particular, using an O(k)-sized shortlist, can an online algorithm achieve a competitive ratio close to the best achievable offline approximation factor for this problem? We answer this question affirmatively by giving a polynomial time algorithm that achieves a 1-1/e-epsilon-O(k^{-1}) competitive ratio for any constant epsilon>0, using a shortlist of size eta_epsilon(k)=O(k). This is especially surprising considering that the best known competitive ratio (in polynomial time) for the submodular k-secretary problem is (1/e-O(k^{-1/2}))(1-1/e) [Thomas Kesselheim and Andreas Tönnis, 2017]. The proposed algorithm also has significant implications for the important related problem of submodular function maximization under the random-order streaming model with a k-cardinality constraint. We show that our algorithm can be implemented in the streaming setting using a memory buffer of size eta_epsilon(k)=O(k) to achieve a 1-1/e-epsilon-O(k^{-1}) approximation. This result substantially improves upon [Norouzi-Fard et al., 2018], which achieved the previously best known approximation factor of 1/2 + 8 x 10^{-14} using O(k log k) memory, and closely matches the known upper bound for this problem [McGregor and Vu, 2017].
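
To make the size of the improvement concrete, here is the arithmetic behind the two competitive ratios quoted above, ignoring the lower-order epsilon, O(k^{-1}) and O(k^{-1/2}) terms; the few lines of Python below just evaluate these constants.

import math

previous = (1 / math.e) * (1 - 1 / math.e)   # (1/e)(1 - 1/e) ~= 0.233, prior bound without shortlists
this_paper = 1 - 1 / math.e                  # 1 - 1/e ~= 0.632, with an O(k)-size shortlist
offline = 1 - 1 / math.e                     # best achievable offline approximation factor

print(f"{previous:.3f} -> {this_paper:.3f} (offline benchmark: {offline:.3f})")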

Cite as

Shipra Agrawal, Mohammad Shadravan, and Cliff Stein. Submodular Secretary Problem with Shortlists. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.ITCS.2019.1,
  author =	{Agrawal, Shipra and Shadravan, Mohammad and Stein, Cliff},
  title =	{{Submodular Secretary Problem with Shortlists}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.1},
  URN =		{urn:nbn:de:0030-drops-100949},
  doi =		{10.4230/LIPIcs.ITCS.2019.1},
  annote =	{Keywords: Submodular Optimization, Secretary Problem, Streaming Algorithms}
}
Document
Hamiltonian Sparsification and Gap-Simulation

Authors: Dorit Aharonov and Leo Zhou

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Analog quantum simulation - simulation of one Hamiltonian by another - is one of the major goals in the noisy intermediate-scale quantum computation (NISQ) era, and has many applications in quantum complexity. We initiate the rigorous study of the physical resources required for such simulations, where we focus on the task of Hamiltonian sparsification. The goal is to find a simulating Hamiltonian H~ whose underlying interaction graph has bounded degree (this is called degree-reduction) or far fewer edges than that of the original Hamiltonian H (this is called dilution). We set this study in a relaxed framework for analog simulations that we call gap-simulation, where H~ is only required to simulate the groundstate(s) and spectral gap of H instead of its full spectrum, and we believe it is of independent interest. Our main result is a proof that in stark contrast to the classical setting, general degree-reduction is impossible in the quantum world, even under our relaxed notion of gap-simulation. The impossibility proof relies on devising counterexample Hamiltonians and applying a strengthened variant of the Hastings-Koma decay of correlations theorem. We also show a complementary result where degree-reduction is possible when the strength of interactions is allowed to grow polynomially. Furthermore, we prove the impossibility of the related sparsification task of generic Hamiltonian dilution, under a computational hardness assumption. We also clarify the (currently weak) implications of our results for the question of quantum PCP. Our work provides basic answers to many of the "first questions" one would ask about Hamiltonian sparsification and gap-simulation; we hope this serves as a good starting point for future research on these topics.

Cite as

Dorit Aharonov and Leo Zhou. Hamiltonian Sparsification and Gap-Simulation. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 2:1-2:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{aharonov_et_al:LIPIcs.ITCS.2019.2,
  author =	{Aharonov, Dorit and Zhou, Leo},
  title =	{{Hamiltonian Sparsification and Gap-Simulation}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{2:1--2:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.2},
  URN =		{urn:nbn:de:0030-drops-100956},
  doi =		{10.4230/LIPIcs.ITCS.2019.2},
  annote =	{Keywords: quantum simulation, quantum Hamiltonian complexity, sparsification, quantum PCP}
}
Document
On Solving Linear Systems in Sublinear Time

Authors: Alexandr Andoni, Robert Krauthgamer, and Yosef Pogrow

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We study sublinear algorithms that solve linear systems locally. In the classical version of this problem the input is a matrix S in R^{n x n} and a vector b in R^n in the range of S, and the goal is to output x in R^n satisfying Sx=b. For the case when the matrix S is symmetric diagonally dominant (SDD), the breakthrough algorithm of Spielman and Teng [STOC 2004] approximately solves this problem in near-linear time (in the input size, which is the number of non-zeros in S), and subsequent papers have further simplified, improved, and generalized the algorithms for this setting. Here we focus on computing one (or a few) coordinates of x, which potentially allows for sublinear algorithms. Formally, given an index u in [n] together with S and b as above, the goal is to output an approximation x̂_u for x^*_u, where x^* is a fixed solution to Sx=b. Our results show that there is a qualitative gap between SDD matrices and the more general class of positive semidefinite (PSD) matrices. For SDD matrices, we develop an algorithm that approximates a single coordinate x_u in time that is polylogarithmic in n, provided that S is sparse and has a small condition number (e.g., Laplacian of an expander graph). The approximation guarantee is additive: |x̂_u - x^*_u| <= epsilon * ||x^*||_infty for accuracy parameter epsilon > 0. We further prove that the condition-number assumption is necessary and tight. In contrast to the SDD matrices, we prove that for certain PSD matrices S, the running time must be at least polynomial in n (for the same additive approximation), even if S has bounded sparsity and condition number.
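
One standard way to build intuition for such local solvers is the truncated Neumann (Jacobi) series: writing S = D - A with diagonal D, we have x = sum_{k >= 0} (D^{-1}A)^k D^{-1} b whenever the series converges, and the k-th term of coordinate u depends only on the k-hop neighborhood of u, so a sparse, well-conditioned instance needs only a small neighborhood. The sketch below is our own dense illustration of that idea on a tiny strictly diagonally dominant system; it is not the paper's algorithm and does not itself run in sublinear time.

import numpy as np

# A tiny strictly diagonally dominant system: S = D - A, where A is the adjacency
# matrix of a 5-cycle and D puts (degree + 2) on the diagonal.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
D = np.diag(A.sum(axis=1) + 2.0)
S = D - A
b = np.arange(1.0, n + 1.0)
x_star = np.linalg.solve(S, b)

# Truncated Neumann series: x ~= sum_{k <= T} (D^{-1} A)^k D^{-1} b.  The k-th term of
# coordinate u only involves the k-hop neighborhood of u, which is what a genuinely
# local solver exploits; here we compute everything densely, purely for illustration.
D_inv = np.linalg.inv(D)
term = D_inv @ b
approx = term.copy()
for _ in range(20):
    term = D_inv @ (A @ term)
    approx += term

u = 0
print(abs(approx[u] - x_star[u]))   # tiny: the series converges fast when S is well conditioned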

Cite as

Alexandr Andoni, Robert Krauthgamer, and Yosef Pogrow. On Solving Linear Systems in Sublinear Time. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{andoni_et_al:LIPIcs.ITCS.2019.3,
  author =	{Andoni, Alexandr and Krauthgamer, Robert and Pogrow, Yosef},
  title =	{{On Solving Linear Systems in Sublinear Time}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.3},
  URN =		{urn:nbn:de:0030-drops-100966},
  doi =		{10.4230/LIPIcs.ITCS.2019.3},
  annote =	{Keywords: Linear systems, Laplacian solver, Sublinear time, Randomized linear algebra}
}
Document
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

Authors: Benny Applebaum and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x,y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower-bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower-bounds of Omega(n) or Omega(n^{1-epsilon}), providing an exponential improvement over previous logarithmic lower-bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower-bounds for imperfect CDS protocols is a necessary step towards proving explicit lower-bounds against the class AM, or even AM ∩ coAM - a well known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower-bounds are known.
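
As a reminder of what the CDS model looks like operationally, here is a textbook-style one-round CDS protocol for the equality predicate over a prime field (our illustration; it is not taken from this paper, whose focus is lower bounds). Alice and Bob share random field elements w and u; Alice sends m1 = z + w*x + u, Bob sends m2 = w*y + u, and Carol computes m1 - m2 = z + w*(x - y), which equals z exactly when x = y and is uniformly random otherwise.

import random

P = 2**61 - 1   # a prime modulus; each message is a single field element

def cds_equality(x, y, z, rng=random):
    # One-round CDS for the predicate f(x, y) = [x == y] (illustrative only).
    w = rng.randrange(P)            # shared randomness known to Alice and Bob, not Carol
    u = rng.randrange(P)
    m_alice = (z + w * x + u) % P   # Alice's single message to Carol
    m_bob = (w * y + u) % P         # Bob's single message to Carol
    return (m_alice - m_bob) % P    # Carol's reconstruction: z + w * (x - y) mod P

secret = 424242
print(cds_equality(7, 7, secret) == secret)   # True: the predicate holds, z is revealed
print(cds_equality(7, 8, secret) == secret)   # almost surely False: z remains hidden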

Cite as

Benny Applebaum and Prashant Nalini Vasudevan. Placing Conditional Disclosure of Secrets in the Communication Complexity Universe. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{applebaum_et_al:LIPIcs.ITCS.2019.4,
  author =	{Applebaum, Benny and Vasudevan, Prashant Nalini},
  title =	{{Placing Conditional Disclosure of Secrets in the Communication Complexity Universe}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.4},
  URN =		{urn:nbn:de:0030-drops-100976},
  doi =		{10.4230/LIPIcs.ITCS.2019.4},
  annote =	{Keywords: Conditional Disclosure of Secrets, Information-Theoretic Security}
}
Document
Bitcoin: A Natural Oligopoly

Authors: Nick Arnosti and S. Matthew Weinberg

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Although Bitcoin was intended to be a decentralized digital currency, in practice, mining power is quite concentrated. This fact is a persistent source of concern for the Bitcoin community. We provide an explanation using a simple model to capture miners' incentives to invest in equipment. In our model, n miners compete for a prize of fixed size. Each miner chooses an investment q_i, incurring cost c_i q_i, and then receives reward (q_i^alpha)/(sum_j q_j^alpha), for some alpha >= 1. When c_i = c_j for all i,j, and alpha = 1, there is a unique equilibrium where all miners invest equally. However, we prove that under seemingly mild deviations from this model, equilibrium outcomes become drastically more centralized. In particular:
- When costs are asymmetric, if miner i chooses to invest, then miner j has market share at least 1-c_j/c_i. That is, if miner j has costs that are (e.g.) 20% lower than those of miner i, then miner j must control at least 20% of the total mining power.
- In the presence of economies of scale (alpha > 1), every market participant has a market share of at least 1-1/(alpha), implying that the market features at most alpha/(alpha - 1) miners in total.
We discuss the implications of our results for the future design of cryptocurrencies. In particular, our work further motivates the study of protocols that minimize "orphaned" blocks, proof-of-stake protocols, and incentive compatible protocols.
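
The two centralization bounds are easy to sanity-check numerically. The snippet below (our own, purely illustrative) evaluates the reward-share function from the model and the two quoted bounds for sample parameters: with costs 20% apart the cheaper miner's guaranteed share is 20%, and with economies of scale alpha = 2 every active miner holds at least half the market, so at most alpha/(alpha - 1) = 2 miners can coexist.

def reward_share(q, i, alpha=1.0):
    # Miner i's share of the fixed prize given the investment profile q, per the model above.
    return q[i] ** alpha / sum(qj ** alpha for qj in q)

# Asymmetric costs: the cheaper miner j is guaranteed market share at least 1 - c_j / c_i.
c_i, c_j = 1.0, 0.8                                 # miner j's cost is 20% lower
print(f"{1 - c_j / c_i:.2f}")                       # 0.20 -> at least 20% of the mining power

# Economies of scale: every active miner holds market share at least 1 - 1/alpha.
alpha = 2.0
print(f"{1 - 1 / alpha:.2f}")                       # 0.50 -> at most alpha / (alpha - 1) = 2 miners
print(f"{reward_share([3.0, 1.0], 0, alpha):.2f}")  # 0.90: share of the larger investor under alpha = 2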

Cite as

Nick Arnosti and S. Matthew Weinberg. Bitcoin: A Natural Oligopoly. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{arnosti_et_al:LIPIcs.ITCS.2019.5,
  author =	{Arnosti, Nick and Weinberg, S. Matthew},
  title =	{{Bitcoin: A Natural Oligopoly}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.5},
  URN =		{urn:nbn:de:0030-drops-100989},
  doi =		{10.4230/LIPIcs.ITCS.2019.5},
  annote =	{Keywords: Bitcoin, Cryptocurrencies, Rent-Seeking Competition}
}
Document
A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling

Authors: Sepehr Assadi, Michael Kapralov, and Sanjeev Khanna

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In the subgraph counting problem, we are given a (large) input graph G(V, E) and a (small) target graph H (e.g., a triangle); the goal is to estimate the number of occurrences of H in G. Our focus here is on designing sublinear-time algorithms for approximately computing the number of occurrences of H in G in the setting where the algorithm is given query access to G. This problem has been studied in several recent papers which primarily focused on specific families of graphs H such as triangles, cliques, and stars. However, not much is known about approximate counting of arbitrary graphs H in the literature. This is in sharp contrast to the closely related subgraph enumeration problem that has received significant attention in the database community as the database join problem. The AGM bound shows that the maximum number of occurrences of any arbitrary subgraph H in a graph G with m edges is O(m^{rho(H)}), where rho(H) is the fractional edge-cover of H, and enumeration algorithms with matching runtime are known for any H. We bridge this gap between subgraph counting and subgraph enumeration by designing a simple sublinear-time algorithm that can estimate the number of occurrences of any arbitrary graph H in G, denoted by #H, to within a (1 +/- epsilon)-approximation with high probability in O(m^{rho(H)}/#H) * poly(log(n),1/epsilon) time. Our algorithm is allowed the standard set of queries for general graphs, namely degree queries, pair queries and neighbor queries, plus an additional edge-sample query that returns an edge chosen uniformly at random. The performance of our algorithm matches that of Eden et al. [FOCS 2015, STOC 2018] for counting triangles and cliques, and extends to all choices of subgraph H under the additional assumption of edge-sample queries.
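
The exponent rho(H) above is the optimum of the fractional edge-cover linear program of H: minimize the total weight placed on edges subject to every vertex receiving total weight at least 1. The snippet below (our illustration; it assumes SciPy is available) solves that LP for a triangle and recovers rho = 3/2, so for triangle counting the stated bound becomes roughly m^{3/2}/#H time, consistent with the Eden et al. results cited in the abstract.

import numpy as np
from scipy.optimize import linprog

def fractional_edge_cover(num_vertices, edges):
    # rho(H): minimize the total edge weight subject to every vertex being covered
    # by total weight at least 1 (edge weights are non-negative).
    A = np.zeros((num_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        A[u, j] = A[v, j] = 1.0
    # linprog minimizes c @ x subject to A_ub @ x <= b_ub, with x >= 0 by default.
    result = linprog(c=np.ones(len(edges)), A_ub=-A, b_ub=-np.ones(num_vertices))
    return result.fun

triangle = [(0, 1), (1, 2), (0, 2)]
print(fractional_edge_cover(3, triangle))   # 1.5, i.e. rho(triangle) = 3/2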

Cite as

Sepehr Assadi, Michael Kapralov, and Sanjeev Khanna. A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.ITCS.2019.6,
  author =	{Assadi, Sepehr and Kapralov, Michael and Khanna, Sanjeev},
  title =	{{A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.6},
  URN =		{urn:nbn:de:0030-drops-100996},
  doi =		{10.4230/LIPIcs.ITCS.2019.6},
  annote =	{Keywords: Sublinear-time algorithms, Subgraph counting, AGM bound}
}
  • Refine by Author
  • 10 Blum, Avrim
  • 2 Ahmadi, Saba
  • 2 Beyhaghi, Hedyeh
  • 2 Dinur, Irit
  • 2 Goldreich, Oded
  • Show More...

  • Refine by Classification
  • 7 Theory of computation → Circuit complexity
  • 6 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 5 Theory of computation → Communication complexity
  • 5 Theory of computation → Computational complexity and cryptography
  • 4 Theory of computation → Algorithmic game theory
  • Show More...

  • Refine by Keyword
  • 4 Property Testing
  • 3 Game theory
  • 3 lower bounds
  • 2 Monotonicity Testing
  • 2 SAT
  • Show More...

  • Refine by Type
  • 77 document
  • 1 volume

  • Refine by Publication Year
  • 70 2019
  • 2 2018
  • 2 2020
  • 1 2007
  • 1 2022
  • Show More...
