Search Results

Documents authored by Błasiok, Jarosław


Document
Loss Minimization Yields Multicalibration for Large Neural Networks

Authors: Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Adam Tauman Kalai, and Preetum Nakkiran

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Multicalibration is a notion of fairness for predictors that requires them to provide calibrated predictions across a large set of protected groups. Multicalibration is known to be a goal distinct from loss minimization, even for simple predictors such as linear functions. In this work, we consider the setting where the protected groups can be represented by neural networks of size k, and the predictors are neural networks of size n > k. We show that minimizing the squared loss over all neural nets of size n implies multicalibration for all but a bounded number of unlucky values of n. We also give evidence that our bound on the number of unlucky values is tight, given our proof technique. Previously, results of the flavor that loss minimization yields multicalibration were known only for predictors that were near the ground truth, and hence were rather limited in applicability. Unlike these, our results rely on the expressivity of neural nets and utilize the representation of the predictor.
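For context, one standard formalization of multicalibration (following earlier work of Hébert-Johnson et al.; the paper's exact variant may differ in technical details such as how small groups are weighted) requires, for a class 𝒞 of group-membership functions c: X → {0,1} and a predictor f: X → [0,1],

  \bigl|\,\mathbb{E}\bigl[\, y - f(x) \;\big|\; f(x) = v,\; c(x) = 1 \,\bigr]\bigr| \;\le\; \alpha \qquad \text{for every } c \in \mathcal{C} \text{ and every value } v \text{ taken by } f,

i.e., the predictor must be calibrated simultaneously on every group defined by 𝒞, not only on average over the whole population. In this work, 𝒞 consists of the groups representable by neural networks of size k.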

Cite as

Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Adam Tauman Kalai, and Preetum Nakkiran. Loss Minimization Yields Multicalibration for Large Neural Networks. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blasiok_et_al:LIPIcs.ITCS.2024.17,
  author =	{B{\l}asiok, Jaros{\l}aw and Gopalan, Parikshit and Hu, Lunjia and Kalai, Adam Tauman and Nakkiran, Preetum},
  title =	{{Loss Minimization Yields Multicalibration for Large Neural Networks}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.17},
  URN =		{urn:nbn:de:0030-drops-195452},
  doi =		{10.4230/LIPIcs.ITCS.2024.17},
  annote =	{Keywords: Multi-group fairness, loss minimization, neural networks}
}
Document
Matrix Multiplication and Number on the Forehead Communication

Authors: Josh Alman and Jarosław Błasiok

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Three-player Number On the Forehead communication may be thought of as a three-player Number In the Hand promise model, in which each player is given the inputs that are supposedly on the other two players' heads, and is promised that they are consistent with the inputs of the other players. The set of all allowed inputs under this promise may be thought of as an order-3 tensor. Surprisingly, we observe that this tensor is exactly the matrix multiplication tensor, which is widely studied in the design of fast matrix multiplication algorithms. Using this connection, we prove a number of results about both Number On the Forehead communication and matrix multiplication, each by using known results or techniques about the other. For example, we show how the Laser method, a key technique used to design the best matrix multiplication algorithms, can also be used to design communication protocols for a variety of problems. We also show how known lower bounds for Number On the Forehead communication can be used to bound properties of the matrix multiplication tensor such as its zeroing-out subrank. Finally, we substantially generalize known methods based on slice rank for studying communication, and show how they relate directly to the matrix multiplication exponent ω.
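For reference, the matrix multiplication tensor referred to above is the order-3 tensor

  \langle a, b, c \rangle \;=\; \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{c} x_{ij}\, y_{jk}\, z_{ki},

whose support consists exactly of the variable triples whose shared indices agree pairwise. Roughly speaking (and glossing over the precise encoding used in the paper), this pairwise-consistency pattern is what matches the promise above, in which each player's copy of the other two players' inputs must be consistent with what those players actually hold.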

Cite as

Josh Alman and Jarosław Błasiok. Matrix Multiplication and Number on the Forehead Communication. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{alman_et_al:LIPIcs.CCC.2023.16,
  author =	{Alman, Josh and B{\l}asiok, Jaros{\l}aw},
  title =	{{Matrix Multiplication and Number on the Forehead Communication}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.16},
  URN =		{urn:nbn:de:0030-drops-182861},
  doi =		{10.4230/LIPIcs.CCC.2023.16},
  annote =	{Keywords: Number on the forehead, communication complexity, matrix multiplication}
}
Document
Communication Complexity of Inner Product in Symmetric Normed Spaces

Authors: Alexandr Andoni, Jarosław Błasiok, and Arnold Filtser

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We introduce and study the communication complexity of computing the inner product of two vectors, where the input is restricted w.r.t. a norm N on the space ℝⁿ. Here, Alice and Bob hold two vectors v, u such that ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, where N^* is the dual norm. The goal is to compute their inner product ⟨v,u⟩ up to an ε additive term. The problem is denoted by IP_N, and generalizes important previously studied problems, such as: (1) computing the expectation 𝔼_{x∼𝒟}[f(x)] when Alice holds 𝒟 and Bob holds f, which is equivalent to IP_{𝓁₁}; (2) computing v^TAv where Alice has a symmetric matrix with bounded operator norm (denoted S_∞) and Bob has a vector v with ‖v‖₂ = 1, a problem that is complete for quantum communication complexity and is equivalent to IP_{S_∞}.

We systematically study IP_N, showing the following results, near tight in most cases:
1) For any symmetric norm N, given ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, there is a randomized protocol using 𝒪̃(ε^{-6} log n) bits of communication that returns a value in ⟨u,v⟩ ± ε with probability 2/3; we denote this by ℛ_{ε,1/3}(IP_N) ≤ 𝒪̃(ε^{-6} log n). In the special case where N = 𝓁_p and N^* = 𝓁_q for p^{-1} + q^{-1} = 1, we obtain an improved bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≤ 𝒪(ε^{-2} log n), nearly matching the lower bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≥ Ω(min(n, ε^{-2})).
2) One-way communication complexity ℛ^{→}_{ε,δ}(IP_{𝓁_p}) ≤ 𝒪(ε^{-max(2,p)} ⋅ log n/ε), with a nearly matching lower bound ℛ^{→}_{ε,1/3}(IP_{𝓁_p}) ≥ Ω(ε^{-max(2,p)}) for ε^{-max(2,p)} ≪ n.
3) The one-way communication complexity ℛ^{→}_{ε,δ}(N) for a symmetric norm N is governed by the distortion of embedding 𝓁_∞^k into N. Specifically, while a small-distortion embedding easily implies a lower bound of Ω(k), we show that, conversely, non-existence of such an embedding implies a protocol with communication k^{𝒪(log log k)} log² n.
4) For an arbitrary origin-symmetric convex polytope P, we show ℛ_{ε,1/3}(IP_N) ≤ 𝒪(ε^{-2} log xc(P)), where N is the unique norm for which P is the unit ball, and xc(P) is the extension complexity of P (i.e., the smallest number of inequalities describing some polytope P' such that P is a projection of P').
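To illustrate the flavor of such upper bounds (this is a folklore importance-sampling protocol for the self-dual case N = N^* = 𝓁₂, sketched here in Python for illustration only; it is not the paper's protocol, and the function names are hypothetical), Alice samples 𝒪(ε^{-2}) coordinates in proportion to v_i² and each sample costs 𝒪(log n) bits of communication:

  import math
  import random

  def ip_l2_protocol(v, u, eps, delta=1/3):
      """Importance-sampling estimate of <v, u> assuming ||v||_2 <= 1, ||u||_2 <= 1.
      Alice samples an index i with probability v_i^2 / ||v||_2^2 and sends
      (i, v_i, ||v||_2^2); Bob forms the unbiased estimate ||v||_2^2 * u_i / v_i.
      Averaging O(1/(delta * eps^2)) samples gives additive error eps with
      probability >= 1 - delta by Chebyshev, since each estimate has variance <= 1."""
      norm_sq = sum(x * x for x in v)
      if norm_sq == 0:
          return 0.0                                   # v = 0, so <v, u> = 0 exactly
      weights = [x * x / norm_sq for x in v]           # Alice's sampling distribution
      m = math.ceil(1 / (delta * eps ** 2))            # number of samples
      idx = random.choices(range(len(v)), weights=weights, k=m)
      return sum(norm_sq * u[i] / v[i] for i in idx) / m

  # Example: the true inner product is 0.48.
  v = [0.6, 0.8, 0.0]
  u = [0.0, 0.6, 0.8]
  print(ip_l2_protocol(v, u, eps=0.05))

Ignoring the precision needed to transmit v_i and ‖v‖₂², this matches the 𝒪(ε^{-2} log n) communication stated in item 1) for the 𝓁_p case with p = q = 2.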

Cite as

Alexandr Andoni, Jarosław Błasiok, and Arnold Filtser. Communication Complexity of Inner Product in Symmetric Normed Spaces. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 4:1-4:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{andoni_et_al:LIPIcs.ITCS.2023.4,
  author =	{Andoni, Alexandr and B{\l}asiok, Jaros{\l}aw and Filtser, Arnold},
  title =	{{Communication Complexity of Inner Product in Symmetric Normed Spaces}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{4:1--4:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.4},
  URN =		{urn:nbn:de:0030-drops-175077},
  doi =		{10.4230/LIPIcs.ITCS.2023.4},
  annote =	{Keywords: communication complexity, symmetric norms}
}
Document
RANDOM
Fourier Growth of Structured 𝔽₂-Polynomials and Applications

Authors: Jarosław Błasiok, Peter Ivanov, Yaonan Jin, Chin Ho Lee, Rocco A. Servedio, and Emanuele Viola

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We analyze the Fourier growth, i.e. the L₁ Fourier weight at level k (denoted L_{1,k}), of various well-studied classes of "structured" 𝔽₂-polynomials. This study is motivated by applications in pseudorandomness, in particular recent results and conjectures due to [Chattopadhyay et al., 2019; Chattopadhyay et al., 2019; Eshan Chattopadhyay et al., 2020] which show that upper bounds on Fourier growth (even at level k = 2) give unconditional pseudorandom generators.

Our main structural results on Fourier growth are as follows:
- We show that any symmetric degree-d 𝔽₂-polynomial p has L_{1,k}(p) ≤ Pr[p = 1] ⋅ O(d)^k. This quadratically strengthens an earlier bound that was implicit in [Omer Reingold et al., 2013].
- We show that any read-Δ degree-d 𝔽₂-polynomial p has L_{1,k}(p) ≤ Pr[p = 1] ⋅ (kΔd)^{O(k)}.
- We establish a composition theorem which gives L_{1,k} bounds on disjoint compositions of functions that are closed under restrictions and admit L_{1,k} bounds.

Finally, we apply the above structural results to obtain new unconditional pseudorandom generators and new correlation bounds for various classes of 𝔽₂-polynomials.
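To make the central quantity concrete, here is a minimal brute-force Python sketch (illustrative only, not from the paper; the helper names are hypothetical) that computes the level-k L₁ Fourier weight L_{1,k}(f) = Σ_{|S|=k} |f̂(S)| of a Boolean function given as a 0/1-valued predicate on {0,1}ⁿ:

  from itertools import combinations, product

  def fourier_coefficient(f, S, n):
      # \hat{f}(S) = E_x[ f(x) * prod_{i in S} (-1)^{x_i} ], x uniform in {0,1}^n
      total = 0.0
      for x in product((0, 1), repeat=n):
          chi = 1
          for i in S:
              chi *= -1 if x[i] else 1
          total += f(x) * chi
      return total / 2 ** n

  def level_k_weight(f, n, k):
      # L_{1,k}(f) = sum of |\hat{f}(S)| over all subsets S of size exactly k
      return sum(abs(fourier_coefficient(f, S, n)) for S in combinations(range(n), k))

  # Example: the symmetric degree-2 F_2-polynomial p(x) = x_0 * x_1 (AND of two bits)
  # has Pr[p = 1] = 1/4 and level-1 Fourier weight L_{1,1}(p) = 1/2.
  p = lambda x: x[0] * x[1]
  print(level_k_weight(p, n=2, k=1))   # prints 0.5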

Cite as

Jarosław Błasiok, Peter Ivanov, Yaonan Jin, Chin Ho Lee, Rocco A. Servedio, and Emanuele Viola. Fourier Growth of Structured 𝔽₂-Polynomials and Applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blasiok_et_al:LIPIcs.APPROX/RANDOM.2021.53,
  author =	{B{\l}asiok, Jaros{\l}aw and Ivanov, Peter and Jin, Yaonan and Lee, Chin Ho and Servedio, Rocco A. and Viola, Emanuele},
  title =	{{Fourier Growth of Structured \mathbb{F}₂-Polynomials and Applications}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.53},
  URN =		{urn:nbn:de:0030-drops-147462},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.53},
  annote =	{Keywords: Fourier analysis, Pseudorandomness, Fourier growth}
}