Privacy-Computation Trade-Offs in Private Repetition and Metaselection
Abstract
A Private Repetition algorithm takes as input a differentially private algorithm with constant success probability and boosts it to one that succeeds with high probability. These algorithms are closely related to private metaselection algorithms that compete with the best of many private algorithms, and private hyperparameter tuning algorithms that compete with the best hyperparameter settings for a private learning algorithm. Existing algorithms for these tasks pay either a large overhead in privacy cost, or a large overhead in computational cost. In this work, we show strong lower bounds for problems of this kind, showing in particular that for any algorithm that preserves the privacy cost up to a constant factor, the failure probability can only fall polynomially in the computational overhead. This is in stark contrast with the non-private setting, where the failure probability falls exponentially in the computational overhead. By carefully combining existing algorithms for metaselection, we prove computation-privacy tradeoffs that nearly match our lower bounds.
Keywords and phrases:
Differential Privacy, Hyperparameter Tuning, Metaselection
2012 ACM Subject Classification:
Security and privacy; Theory of computation → Theory and algorithms for application domains
Editors:
Mark Bun
Series and Publisher:
Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Randomized algorithms have probabilistic guarantees. Often one designs an algorithm that succeeds (for an appropriate notion of success) with constant probability. A standard approach to boosting the success probability of a randomized algorithm is to run the algorithm with fresh randomness multiple times and take the best of the results of the individual runs. Standard tail inequalities then allow for proving bounds on the success probability of the resulting algorithm. Suppose that an algorithm $A$ produces an output with quality score at least $q$ with probability at least $\frac{1}{2}$. (Without loss of generality, we assume our goal is to maximize a quality score; our results translate immediately to the setting where we want to minimize a quality score. The constant $\frac{1}{2}$ can be replaced by any other constant.) As an example, the algorithm of [13] for finding a maximum cut in a graph finds a cut with value within a small constant factor of the optimum with constant probability. Let $A_k$ be the algorithm that runs $A$ $k$ times to get outputs $(x_1, q_1), \ldots, (x_k, q_k)$ and releases the $x_i$ with the highest quality score $q_i$. The basic repetition theorem says that $A_k$ produces an output with quality score at least $q$ except with probability $2^{-k}$. An important aspect of this result is the relationship between the failure probability and the number of repetitions: logarithmically many repetitions suffice to make the failure probability polynomially small. In the example above, repeating the maximum cut algorithm 20 times ensures that the best cut amongst these is approximately optimal except with probability $2^{-20} \approx 10^{-6}$. Various versions of repetition theorems have been studied in the literature, e.g. improving on the amount of randomness needed [19], or allowing for parallelizability in the case of multiple-round protocols [27, 18].
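As an illustration, here is a minimal sketch of this best-of-$k$ wrapper (not from the paper; `base_alg` is assumed to return an (output, score) pair):

```python
import random

def best_of_k(base_alg, data, k, rng=None):
    """Run base_alg k times with fresh randomness and keep the highest-scoring output.

    If a single run reaches the target score with probability at least 1/2, the best
    of k independent runs misses it with probability at most 2**(-k).
    """
    rng = rng or random.Random()
    best_output, best_score = None, float("-inf")
    for _ in range(k):
        output, score = base_alg(data, rng)
        if score > best_score:
            best_output, best_score = output, score
    return best_output, best_score
```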
In this work, we are interested in this question when the algorithm is differentially private (see Section 2 for a precise definition). We assume we have a differentially private algorithm $A$ that outputs an object (e.g. an ML model) and a score (e.g. approximate accuracy on a test dataset), where the score is “high” with constant probability. Private repetition theorems allow for boosting the success probability while controlling the privacy cost. A simple private repetition theorem uses the approach above together with composition: to get failure probability below $\beta$, it suffices to set $k = \lceil \log_2 \frac{1}{\beta} \rceil$. This increases the privacy parameters (e.g. the $\varepsilon$ in $\varepsilon$-DP) of the algorithm by a factor of $\log_2 \frac{1}{\beta}$, which may be undesirable. Liu and Talwar [20] designed and analyzed a different algorithm for private repetition that we will refer to as LT for the rest of the introduction. The LT algorithm allows for only a constant-factor increase in the privacy cost, while allowing an arbitrarily small failure probability $\beta$. However, it requires running the algorithm about $\frac{1}{\beta}$ times, instead of the logarithmic dependence on $\frac{1}{\beta}$ in the naive repetition theorem.
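For contrast, the following is a rough sketch of the random-stopping idea behind the LT algorithm; the stopping rule and the parameter `gamma` are illustrative simplifications, and the precise algorithm and its privacy analysis are in [20]:

```python
import random

def random_stopping_repetition(base_alg, data, gamma, rng=None, max_runs=10**7):
    """Repeatedly run base_alg; after each run, stop with probability gamma and
    release the best output seen so far.

    With gamma on the order of the target failure probability beta, the expected
    number of runs is about 1/gamma, while the privacy cost of the released output
    stays within a constant factor of the base algorithm's (see [20]).
    """
    rng = rng or random.Random()
    best_output, best_score = None, float("-inf")
    for _ in range(max_runs):
        output, score = base_alg(data, rng)
        if score > best_score:
            best_output, best_score = output, score
        if rng.random() < gamma:  # geometric stopping time
            break
    return best_output, best_score
```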
In other words, existing private repetition theorems either pay a $\log \frac{1}{\beta}$ overhead in the privacy cost, or a $\mathrm{poly}(\frac{1}{\beta})$ overhead in the run time. A similar dichotomy exists for private hyperparameter selection, where one wants to privately select the best model amongst those resulting from different hyperparameter choices. Indeed, private hyperparameter tuning is one of the standard uses of private repetition theorems [20, 26], and the LT algorithm has been used in practice recently by Israel’s Ministry of Health in their release of 2014 live births microdata [17]. The private learning algorithms for a single hyperparameter setting can be computationally expensive to run. As an example, in Hod and Canetti [17], the algorithm can involve training a CTGAN [33, 34, 28] using DPSGD [1] or PATE [24, 25] for a given set of hyperparameters. In such cases, the run time overhead can be quite significant.
As we show in Section 4, the two algorithms above can be combined to give a smooth trade-off between the privacy cost overhead and the computational overhead. It is natural to ask, however, whether such a trade-off is necessary: is there a way to keep the privacy overhead to a constant while paying only a logarithmic overhead in the run time?
In this work, we give a negative answer to this question. We show that for any algorithm with constant privacy overhead, the run time must necessarily be polynomial in $\frac{1}{\beta}$. More generally, our lower bound shows that the trade-off between the computational overhead (denoted by $T$ below) and the privacy overhead (denoted by $k$ below) is nearly tight.
Theorem (Informal version of Theorem 9).
Let $M$ be an oracle algorithm that satisfies the following properties:
-
Privacy: For any $\varepsilon$-DP mechanism $A$, $M^A$ is $k\varepsilon$-DP.
-
Utility: For any input $D$, let $q(D)$ be such that $\Pr[\mathrm{val}(A(D)) \geq q(D)] \geq \frac{1}{2}$. Then $M$ satisfies $\Pr[\mathrm{val}(M^A(D)) \geq q(D)] \geq 1 - \beta$.
-
Runtime: $M^A$ on input $D$ makes at most $T$ calls to $A$ (on any inputs) in expectation.
Then for $\beta$ small enough, $T \geq (\tfrac{1}{\beta})^{\Omega(1/k)}$, or equivalently, $k \geq \Omega\!\left(\tfrac{\log(1/\beta)}{\log T}\right)$.
These results are shown pictorially in Figure 1 (left). The two green filled dots denote the trade-offs achieved by naive repetition and by LT. The blue dashed line is the upper bound achieved by combining these algorithms (Section 4). The grey shaded region can be excluded by straightforward arguments. The lower bound above excludes the red striped region. In particular, it shows that for any constant $k$, $T$ must be at least polynomial in $\frac{1}{\beta}$. At the other extreme, any $T$ that is polylogarithmic in $\frac{1}{\beta}$ requires $k$ to be nearly logarithmic in $\frac{1}{\beta}$. These results hold even when the target value is public.
We extend our results to the setting where the target value is achieved by the mechanism with a probability $\gamma$ that may be different from $\frac{1}{2}$. More generally, in the hyperparameter tuning setting, there are $K$ possible settings of hyperparameters, and our utility bound asks that the algorithm be competitive with the median value of the best hyperparameter setting.
Theorem (Informal version of Theorem 10).
Let $M$ be an oracle algorithm that satisfies the following properties:
-
Privacy: For any $\varepsilon$-DP mechanism $A$, $M^A$ is $k\varepsilon$-DP.
-
Utility: For any input $D$, let $q_\gamma(D)$ be such that $\Pr[\mathrm{val}(A(D)) \geq q_\gamma(D)] \geq \gamma$. Then $\Pr[\mathrm{val}(M^A(D)) \geq q_\gamma(D)] \geq 1 - \beta$.
-
Runtime: $M^A$ on input $D$ makes at most $T$ calls to $A$ (on any inputs) in expectation.
Then for $\beta$ small enough, $\gamma T \geq (\tfrac{1}{\beta})^{\Omega(1/k)}$, or equivalently, $k \geq \Omega\!\left(\tfrac{\log(1/\beta)}{\log(\gamma T)}\right)$.
We show these results in Figure 1 (right). As before, the two green filled dots denote the trade-offs achieved by naive repetition and by LT. Note that the LT bound here is significantly worse, and the better upper bound is achieved by combining LT with naive repetition. As before, the grey shaded region is excluded by straightforward arguments, and the new lower bound excludes the red striped region. In particular, it shows that for any constant $k$, $T$ must be at least polynomial in $\frac{1}{\beta}$. At the other extreme, making $T$ polylogarithmic in $\frac{1}{\beta}$ requires $k$ to be nearly logarithmic in $\frac{1}{\beta}$. As before, the lower bound holds for a known target value.
This has strong implications for private hyperparameter tuning. Indeed, our results imply that any private hyperparameter tuning algorithm that boosts the success probability to $1-\beta$ while paying only a constant overhead in privacy cost must pay a $\mathrm{poly}(\frac{1}{\beta})$ computational overhead.
We remark that both the upper bounds translate pure DP algorithms to pure DP algorithms, and approximate DP algorithms to approximate DP algorithms; similar results hold for Renyi DP algorithms as well [26]. Our lower bounds apply even when the input algorithm is pure DP, and the output algorithm is allowed to be approximate DP (or Renyi DP).
Other Related Work.
Gupta et al. [14] first proved a repetition theorem for the case when the target threshold is known and the value is a low-sensitivity function. Their results incurred larger overheads, both for repetition and for metaselection. As discussed, the random stopping approach of [20] improves on these bounds and removes these restrictions. Subsequently, [26] studied these questions under Renyi differential privacy, and showed positive results for a variety of distributions of stopping times.
There is also a beautiful line of work on meta-algorithms that run multiple private algorithms iteratively, and only pay the privacy cost for those that meet a certain criterion. The Sparse Vector Technique [10, 29, 15, 32] is the simplest version of such a meta-algorithm, where one only pays privacy cost for (numerical, bounded-sensitivity) queries whose answer is above a threshold. This paradigm was significantly expanded by Cohen and Lyu [6], who show a similar result for general target sets, under appropriate assumptions on the algorithms.
Decision vs. Optimization.
Repetition theorems in complexity theory are normally stated for decision problems. This is justified by the fact that one can reduce optimization problems to decision problems with logarithmic (in the range) overhead. However, a logarithmic overhead in privacy cost would be quite significant, as one typically wants the privacy parameter $\varepsilon$ to be a small constant. For this reason, all the above works [14, 20, 26] directly address optimization problems. As decision problems can be reduced to optimization, our results also apply to decision problems.
The need for Private Repetition.
Private algorithms come in many flavors. In many cases, the algorithm itself or a slight modification can be better analyzed to make its failure probability small, at a small cost to the utility criteria. For example, the Laplace mechanism [9] and the exponential mechanism [21] can directly give utility bounds that hold with probability $1-\beta$ for any $\beta > 0$. In these cases, we do not need private repetition algorithms that use a base algorithm as an oracle. However, for many other private algorithms, we do not have such sliding-scale guarantees and can only control the expected utility, or the median utility. This is often the case when the randomization plays a role in the “non-private” part of the algorithm, e.g. in the use of tree embeddings [14, 12], hashing [3], or other more custom analyses [14, 7]. Finally, for some problems, (private) algorithms work well in practice but may not have theoretical guarantees (or work much better than their theoretical guarantees), and one does not have a knob to tune the failure probability. For these two kinds of algorithms, private repetition theorems are an invaluable tool for reducing the failure probability.
Metaselection and hyperparameter tuning algorithms are even more broadly useful. It is fairly common to have algorithms that take a hyperparameter as input and work well, either provably or empirically. As an example, algorithms in the Propose-test-release framework [8] work well when the proposal is true for the distribution. In cases when the proposal can be parameterized (e.g. in [5]), it becomes a hyperparameter. Practical private optimization algorithms (e.g. [1]) typically involve multiple hyperparameters. Often the hyperparameters cannot be well-tuned on public proxy datasets, and can be a source of leakage of private information [26]. Private hyperparameter tuning algorithms are used in practice in such settings, and the computational cost can be a significant concern.
2 Preliminaries
Differential Privacy [9] constrains the distribution of outcomes on neighboring datasets to be close. The Hamming distance $d_H(D, D')$ between two datasets $D$ and $D'$ is the number of entries in which they differ. For our purposes, two datasets are neighboring if they differ in one entry, i.e. if their Hamming distance is 1.
Definition 1.
A randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private (DP) if for any pair of neighboring datasets $D, D'$ and any $S \subseteq \mathrm{Range}(M)$, $\Pr[M(D) \in S] \leq e^{\varepsilon} \Pr[M(D') \in S] + \delta$.
We abbreviate $(\varepsilon, 0)$-DP as $\varepsilon$-DP and refer to it as pure differential privacy.
Sometimes it is convenient to define a mechanism on a subset of inputs, and establish its privacy properties on this restricted subset. The following extension lemma due to Borgs et al. [4] shows that any such partial specification can be extended to the set of all datasets at a small increase in the privacy cost.
Proposition 2 (Extension Lemma).
Let $\hat{M}$ be an $\varepsilon$-differentially private algorithm restricted to a subset $\mathcal{X}$ of datasets, so that for any $D, D' \in \mathcal{X}$ and any $S \subseteq \mathrm{Range}(\hat{M})$, $\Pr[\hat{M}(D) \in S] \leq e^{\varepsilon d_H(D, D')} \Pr[\hat{M}(D') \in S]$. Then there exists a randomized algorithm $M$ defined on the whole input space which is $2\varepsilon$-differentially private and satisfies that for every $D \in \mathcal{X}$, $M(D)$ has the same distribution as $\hat{M}(D)$.
We will be interested in private mechanisms that output a tuple $(x, q)$, where $x$ lies in some arbitrary output space and $q$ is a real-valued score. For example, if the mechanism is computing an approximate maximum cut in a graph, $x$ could be a subset of vertices of the graph and $q$ the (approximate) number of edges in the input graph that leave this subset $x$. In the case of hyperparameter tuning, $x$ would be a model and $q$ an estimate of its accuracy. We denote by $\mathrm{val}(o)$ the real value $q$ when $o = (x, q)$. For a mechanism $A$, we let $\mathrm{med}_A(D)$ denote the median of $\mathrm{val}(o)$ when $o \sim A(D)$. A private amplification algorithm takes as input a dataset $D$ and a failure probability $\beta$, and given oracle access to a mechanism $A$, outputs an $o$ such that $\mathrm{val}(o) \geq \mathrm{med}_A(D)$ with probability $1-\beta$. Here this probability is taken over both the internal randomness of the amplification algorithm as well as the randomness in $A$. A private amplification algorithm may need to make $T$ oracle calls to $A$, and may be $k\varepsilon$-DP whenever $A$ is $\varepsilon$-DP. Our goal is to understand the trade-off between the computational overhead $T$ and the privacy overhead $k$.
We will assume that the amplification algorithm can only output an $(x, q)$ tuple that is produced by a run of $A$ on some input. This is a natural restriction in all settings, and is needed to make the problem meaningful. (E.g., a trivial algorithm that simply computes and outputs a target score, without a corresponding object, would not be useful in any application.) Thus in this paper, we will always make the assumption that the oracle algorithm selects a tuple from the outputs it receives from calls to $A$.
A private metaselection algorithm operates on a set of private algorithms $A_1, \ldots, A_K$, where each $A_j$ outputs a tuple $(x, q)$ as above. It takes as input a dataset $D$ and a failure probability $\beta$, and given oracle access to the mechanisms $A_1, \ldots, A_K$, outputs an $o$ such that $\mathrm{val}(o) \geq \max_j \mathrm{med}_{A_j}(D)$ with probability $1-\beta$. As above, the probability is taken over both the internal randomness of the metaselection algorithm as well as the randomness in the $A_j$'s. The metaselection algorithm may need to make $T$ oracle calls to the $A_j$'s, and may be $k\varepsilon$-DP whenever each $A_j$ is $\varepsilon$-DP. Our goal as before is to understand the trade-off between the computational overhead $T$ and the privacy overhead $k$. Note that hyperparameter tuning can be phrased as metaselection, by treating the private training algorithm for each setting of hyperparameters as a separate algorithm amongst the $K$ options. More generally, we can phrase metaselection and hyperparameter tuning as special cases of repetition, where the target value is a given quantile of the score distribution, instead of the median. This variant can handle the case when the set of hyperparameter settings may be large (or even unbounded) but we expect a non-trivial measure of the hyperparameter settings to be good.
Proposition 3.
Let $\gamma \in (0,1)$ and let $M$ be an oracle algorithm that satisfies the following properties:
-
Privacy: For any $\varepsilon$-DP mechanism $A$, $M^A$ is $k\varepsilon$-DP.
-
Utility: For any input $D$, let $q_\gamma(D)$ be such that $\Pr[\mathrm{val}(A(D)) \geq q_\gamma(D)] \geq \gamma$. Then $M$ satisfies $\Pr[\mathrm{val}(M^A(D)) \geq q_\gamma(D)] \geq 1 - \beta$.
-
Runtime: $M^A$ on input $D$ makes at most $T(\gamma, \beta)$ calls to $A$ (on any inputs) in expectation.
Let $\Theta$ be a hyperparameter space, and let $\mu$ be a probability measure on $\Theta$. Then there is an oracle algorithm $M'$ that satisfies the following properties:
-
Privacy: For any collection of $\varepsilon$-DP mechanisms $\{A_\theta\}_{\theta \in \Theta}$, $M'$ (with oracle access to the $A_\theta$) is $k\varepsilon$-DP.
-
Utility: For any input $D$, let $q(D)$ be such that $\Pr_{\theta \sim \mu}[\mathrm{val}(A_\theta(D)) \geq q(D)] \geq \gamma$. Then $\Pr[\mathrm{val}(M'(D)) \geq q(D)] \geq 1 - \beta$.
-
Runtime: $M'$ on input $D$ makes at most $T(\gamma, \beta)$ calls to the $A_\theta$ (on any inputs) in expectation.
In particular, when $\Theta = \{1, \ldots, K\}$ and $\mu$ is uniform, then taking $\gamma = \frac{1}{2K}$, $M'$ competes with the largest median value $\max_j \mathrm{med}_{A_j}(D)$ and makes $T(\frac{1}{2K}, \beta)$ calls to the $A_j$.
Proof.
Define $A'$ that on input $D$ samples a random $\theta \sim \mu$ and runs $A_\theta(D)$. Running $M$ with oracle $A'$ gives the first result. The second result uses the fact that when $\Theta = \{1, \ldots, K\}$ and $\mu$ is uniform, the output of $A'(D)$ has value at least the largest median $\max_j \mathrm{med}_{A_j}(D)$ with probability at least $\frac{1}{2K}$.
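A minimal sketch of this reduction (the family and sampler names below are illustrative assumptions of the example, not the paper's notation):

```python
import random

def randomize_hyperparameter(alg_family, settings):
    """Wrap a family {A_theta} into a single mechanism A' that samples a uniformly
    random setting theta and runs A_theta. If some setting has median score at least
    q, then A' outputs a score of at least q with probability at least 1/(2K), so a
    repetition algorithm aimed at that quantile competes with the best setting.
    """
    def combined(data, rng=None):
        rng = rng or random.Random()
        theta = rng.choice(settings)          # theta ~ uniform over the K settings
        return alg_family[theta](data, rng)   # run A_theta on the data
    return combined
```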
Differential privacy constrains the likelihood of events on neighboring datasets. This also implies constraints for datasets at some bounded distance. The following forms of these constraints will be useful. The proofs of the following consequences of group privacy [11] are elementary.
Proposition 4.
Let $M$ satisfy $\varepsilon$-DP. Then for datasets $D, D'$ such that $d_H(D, D') \leq r$, and any event $S$, $\Pr[M(D) \in S] \geq e^{-r\varepsilon} \Pr[M(D') \in S]$.
In particular, if , then .
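For concreteness, the standard chain behind such statements (an elementary illustration, with $r = d_H(D, D')$): writing $D = D_0, D_1, \ldots, D_r = D'$ with consecutive datasets neighboring and applying the $\varepsilon$-DP guarantee $r$ times,
\[
\Pr[M(D)\in S] \;\le\; e^{\varepsilon}\Pr[M(D_1)\in S] \;\le\; e^{2\varepsilon}\Pr[M(D_2)\in S] \;\le\;\cdots\;\le\; e^{r\varepsilon}\Pr[M(D')\in S],
\]
and, symmetrically, $\Pr[M(D)\in S] \ge e^{-r\varepsilon}\Pr[M(D')\in S]$.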
Proposition 5.
Let $M$ satisfy $(\varepsilon, \delta)$-DP. Then for data sets $D, D'$ such that $d_H(D, D') \leq r$, and any event $S$, $\Pr[M(D) \in S] \geq e^{-r\varepsilon} \Pr[M(D') \in S] - r\delta$.
In particular, if and , then .
A similar statement holds for Renyi differential privacy.
Proposition 6.
Let satisfy -Renyi DP for . Then for data sets such that , and any event ,
In particular, for , we get
It follows that for , if , then .
Proof.
Let $D = D_0, D_1, \ldots, D_r = D'$ be a sequence of datasets where $D_i$ and $D_{i+1}$ are adjacent. A consequence of $(\alpha, \varepsilon)$-RDP [22, Prop. 10] is that for any event $S$, $\Pr[M(D_i) \in S] \leq \big(e^{\varepsilon} \Pr[M(D_{i+1}) \in S]\big)^{1 - 1/\alpha}$.
This implies the base case () of the claim that for all ,
Suppose that the claim holds for . We now inductively prove it for . We write
This completes the proof of the first part. For the second part, we use the following two facts. Firstly, for , , so that the exponent of is at least (when ). Secondly, the geometric series . The third part follows immediately by rearrangement.
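To make the recursion concrete, here is one way to unroll it (a sketch, assuming the probability-preservation form $\Pr[M(D_i)\in S] \le \big(e^{\varepsilon}\Pr[M(D_{i+1})\in S]\big)^{1-1/\alpha}$ for adjacent $D_i, D_{i+1}$ cited above): applying this bound along the chain $D_0, \ldots, D_r$ gives
\[
\Pr[M(D_0)\in S] \;\le\; \exp\Big(\varepsilon\sum_{j=1}^{r}\big(1-\tfrac{1}{\alpha}\big)^{j}\Big)\cdot\Pr[M(D_r)\in S]^{(1-1/\alpha)^{r}} \;\le\; e^{\varepsilon(\alpha-1)}\cdot\Pr[M(D_r)\in S]^{(1-1/\alpha)^{r}},
\]
where the last step uses $\sum_{j\ge 1}(1-\tfrac{1}{\alpha})^{j} = \alpha-1$.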
3 Main Lower Bound
Theorem 7.
Let $M$ be an oracle algorithm that satisfies the following properties:
-
Privacy: For any $\varepsilon$-DP mechanism $A$, $M^A$ is $k\varepsilon$-DP.
-
Utility: For any input $D$, $\Pr[\mathrm{val}(M^A(D)) \geq \mathrm{med}_A(D)] \geq 1 - \beta$.
-
Runtime: $M^A$ on input $D$ makes at most $T$ calls to $A$ on input $D$ in expectation, for some $T \geq 1$.
Then for , , or equivalently, .
Proof.
With some foresight, we set , and set . Consider datasets in where , and . Now we define a mechanism such that
Here and are two arbitrary distinct elements of . Note that this specification on satisfies -DP as . By the extension lemma (Proposition 2), this partial mechanism can be extended to a -DP mechanism on .
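For intuition, here is a toy instantiation of a hard mechanism family of this kind; the probabilities below are illustrative and need not match the exact construction in the proof:

```python
import math
import random

def hard_mechanism(data, center, eps, rng=None):
    """Toy mechanism: on the 'center' dataset the good output appears with probability
    1/2, and this probability decays by a factor e^{-eps} per unit of Hamming distance,
    so datasets far from the center almost never produce the good output, while
    neighboring datasets have likelihood ratios bounded by e^{eps}.
    """
    rng = rng or random.Random()
    dist = sum(1 for a, b in zip(data, center) if a != b)  # Hamming distance to center
    p_good = 0.5 * math.exp(-eps * dist)
    if rng.random() < p_good:
        return ("x_plus", 1)   # high-quality output
    return ("x_minus", 0)      # low-quality output
```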
Consider a run of . Let be the event that on input makes at most calls to . By Markov’s inequality, . Conditioned on this event , the probability that is the output of any of the at most runs of is at most . Since has never seen an output in this case, it follows that , so that . It follows that .
As is -DP, the privacy property of implies that it is -DP. By Proposition 5, it follows that for ,
By the utility property, the left hand side is at most . Since , it follows that . Rearranging, we get the claimed result. By using Proposition 6 in lieu of Proposition 5, we get a similar result for Renyi DP. We omit the straightforward proof, and similar corollaries of Theorem 9 and Theorem 10.
Corollary 8.
In Theorem 7, if we replace the Privacy condition by
-
Privacy (RDP): For any -DP mechanism , is -RDP.
Then if , , or equivalently, .
3.1 Allowing more general meta-algorithms
One of the assumptions in Theorem 7 is that when $M^A$ is run on input $D$, its oracle calls to $A$ are also on input $D$. A more general oracle algorithm may, given input $D$, run $A$ on additional inputs derived from $D$. We next show that the lower bound continues to hold, up to constants. At a high level, the lower bound in Theorem 7 comes from the inability of $M$, on its given input, to see any sample that is unlikely on that input but likely on a nearby one. We show how to embed such an instance in a mechanism over a slightly larger dataset, in a way where, on the given input, finding an output that is good for the nearby instance is still difficult.
Theorem 9.
Let $M$ be an oracle algorithm that satisfies the following properties:
-
Privacy: For any $\varepsilon$-DP mechanism $A$, $M^A$ is $k\varepsilon$-DP.
-
Utility: For any input $D$, $\Pr[\mathrm{val}(M^A(D)) \geq \mathrm{med}_A(D)] \geq 1 - \beta$.
-
Runtime: $M^A$ on input $D$ makes at most $T$ calls to $A$ (on any inputs) in expectation, for some $T \geq 1$.
Then for , , , or equivalently, .
Proof.
With some foresight, we set , and set . Fix a vector , and let be the radius Hamming ball centered at . We now define a mechanism such that:
As before, and are two arbitrary distinct elements of . Note that this specification on satisfies -DP as for . By the extension lemma (Proposition 2), this partial mechanism can be extended to a -DP mechanism on .
Consider a run of where . Let be the event that on input makes at most calls to on any inputs. By Markov’s inequality, . For chosen uniformly at random, the probability that a single dataset chosen by lies in is given by
Thus the likelihood of seeing an on any given call to , when is chosen at random is
Here and in the rest of the proof, the probability is taken over both the choice of and the randomness of and . Conditioned on the event , the probability that is the output of any of the at most runs of is at most . Since has never seen an output in this case, it follows that , so that . It follows that .
The rest of the proof is now essentially identical to the earlier proof. As is -DP, the privacy property of implies that it is -DP. By Proposition 5, it follows that for ,
By the utility property, the left hand side is at most . Since , it follows that . Rearranging, we get the claimed result.
3.2 Lower Bounds for Hyperparameter Tuning
We next show that the same essential arguments can be used to show a lower bound for private hyperparameter tuning.
Theorem 10.
Let $M$ be an oracle algorithm that satisfies the following properties:
-
Privacy: For any -DP mechanism , is -DP.
-
Utility: For any input , .
-
Runtime: on input makes at most calls to (on any inputs) in expectation for some .
Then for , , , or equivalently, .
Proof.
We set , and set . For a vector , and let be the radius Hamming ball centered at . For a parameter setting , we now define a mechanism such that:
Here, and are two arbitrary distinct elements of . By the extension lemma (Proposition 2), for each , the partial mechanism can be extended to a -DP mechanism on .
Consider a run of where . Let be the event that on input makes at most calls to on any inputs. By Markov’s inequality, . Each call to made by consists of a single dataset and a single . When are chosen uniformly at random, the probability that is
If the chosen by is different from , the likelihood of seeing an is zero. Thus the likelihood of seeing an on a given call to is
Conditioned on the event , the probability that is the output of any of the at most runs of is at most . Since has never seen an output in this case, it follows that , so that . It follows that .
The rest of the proof is essentially identical. As is -DP for each , the privacy property of implies that it is -DP. By Proposition 5, it follows that for small enough .
By the utility property, the left hand side is at most . Since , it follows that . Rearranging, we get the claimed result.
4 Upper Bound Trade-offs
As mentioned earlier, two extreme points on the trade-off between the privacy overhead and the run time overhead are known. We next show the trade-off that is achievable by a simple combination of the two basic approaches. We consider the goal of matching the median score, which is the common case for private repetition (labeled Repetition below), as well as the goal of matching a given quantile, which is the case for metaselection and hyperparameter tuning (labeled Metaselection below). We first state the results one gets from simple repetition.
Theorem 11.
Suppose that $A$ is $\varepsilon$-DP. Then for an integer $k \geq 1$, the algorithm $A_k$ that on input $D$ runs $A$ $k$ times and releases the output with the highest quality score satisfies
-
Privacy: $A_k$ is $k\varepsilon$-DP, as well as $\big(\sqrt{2k\ln(1/\delta)}\,\varepsilon + k\varepsilon(e^{\varepsilon}-1),\ \delta\big)$-DP, for any $\delta > 0$.
-
Runtime: $A_k$ on input $D$ makes exactly $k$ calls to $A$.
-
Utility (Repetition): For any input $D$, if $\Pr[\mathrm{val}(A(D)) \geq q] \geq \frac{1}{2}$ for some target $q$, then $\Pr[\mathrm{val}(A_k(D)) \geq q] \geq 1 - 2^{-k}$.
-
Utility (Metaselection): For any input $D$ and $\gamma \in (0,1)$, $\Pr[\mathrm{val}(A_k(D)) \geq q_\gamma(D)] \geq 1 - (1-\gamma)^{k}$,
where the quantile $q_\gamma(D)$ is such that $\Pr[\mathrm{val}(A(D)) \geq q_\gamma(D)] \geq \gamma$.
Proof.
The privacy follows from simple composition, advanced composition, and post-processing. The utility and run time analyses are straightforward. The following result was shown by Liu and Talwar [20].
Theorem 12.
Suppose that $A$ is $\varepsilon$-DP and $\beta \in (0,1)$. Then the LT algorithm of [20], run with failure parameter $\beta$, satisfies
-
Privacy: the released output is $3\varepsilon$-DP.
-
Runtime: on input $D$, it makes in expectation $O(\frac{1}{\gamma\beta})$ calls to $A$ (with $\gamma = \frac{1}{2}$ for the repetition guarantee below).
-
Utility (Repetition): For any input $D$, the released output $o$ satisfies $\Pr[\mathrm{val}(o) \geq \mathrm{med}_A(D)] \geq 1 - \beta$.
-
Utility (Metaselection): For any input $D$ and $\gamma \in (0,1)$, the algorithm releases an output $o$ satisfying $\Pr[\mathrm{val}(o) \geq q_\gamma(D)] \geq 1 - \beta$, where the quantile $q_\gamma(D)$ is such that $\Pr[\mathrm{val}(A(D)) \geq q_\gamma(D)] \geq \gamma$.
These two algorithms can be combined, by using Theorem 12 to boost the success probability to $1 - \beta^{1/m}$, and then repeating that algorithm $m$ times as in Theorem 11 to further boost the success probability to $1 - \beta$, as in the sketch below.
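A minimal sketch of this combination, reusing the `best_of_k` and `random_stopping_repetition` sketches from the introduction (the parameter choices here are illustrative; Theorem 13 below gives the precise setting):

```python
def combined_boosting(base_alg, data, m, gamma, rng=None):
    """Boost base_alg with random stopping, then take the best of m independent copies
    of the boosted algorithm. The privacy cost grows with m by composition, while the
    failure probability decays exponentially in m.
    """
    def boosted(d, r):
        return random_stopping_repetition(base_alg, d, gamma, rng=r)
    return best_of_k(boosted, data, m, rng=rng)
```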
Theorem 13.
Suppose that $A$ is $\varepsilon$-DP and let $m \geq 1$ be an integer. Then the algorithm $B_m$ that runs the repetition algorithm of Theorem 11 (with $k = m$), using the algorithm of Theorem 12 (run with failure parameter $\beta^{1/m}$) as its oracle, satisfies
-
Privacy: $B_m$ is $3m\varepsilon$-DP.
-
Runtime: on input $D$, $B_m$ makes in expectation $O(m\,\beta^{-1/m})$ calls to $A$.
-
Utility: For any input $D$, $\Pr[\mathrm{val}(B_m(D)) \geq \mathrm{med}_A(D)] \geq 1 - \beta$.
Proof.
By Theorem 12, the inner algorithm is $3\varepsilon$-DP. Theorem 11 now implies that $B_m$ is $3m\varepsilon$-DP. For the chosen parameters, the utility bound in Theorem 12 implies that each run of the inner algorithm reaches the median with probability at least $1 - \beta^{1/m}$. Since this algorithm is repeated $m$ times in $B_m$, the failure probability gets reduced to $\beta$. Finally, the run time bound follows by combining the run time results in Theorem 12 and Theorem 11. We note that for $m \leq \frac{\ln(1/\beta)}{\ln\ln(1/\beta)}$ we have $m \leq \beta^{-1/m}$, so that this bound of $O(m\,\beta^{-1/m})$ is $\beta^{-O(1/m)}$. Thus it matches the lower bound in Theorem 9 up to constant factors in the exponent.
An identical argument yields the following result for metaselection.
Theorem 14.
Suppose that $A$ is $\varepsilon$-DP and let $m \geq 1$ be an integer. Then the algorithm $B_m$ that runs the repetition algorithm of Theorem 11 (with $k = m$), using the metaselection guarantee of Theorem 12 (run with failure parameter $\beta^{1/m}$) as its oracle, satisfies
-
Privacy: $B_m$ is $3m\varepsilon$-DP.
-
Runtime: on input $D$, $B_m$ makes in expectation $O(\frac{m}{\gamma}\,\beta^{-1/m})$ calls to $A$.
-
Utility: For any input $D$, $\Pr[\mathrm{val}(B_m(D)) \geq q_\gamma(D)] \geq 1 - \beta$, where the quantile $q_\gamma(D)$ is such that $\Pr[\mathrm{val}(A(D)) \geq q_\gamma(D)] \geq \gamma$.
Proof.
By Theorem 12, the inner algorithm is $3\varepsilon$-DP. Theorem 11 now implies that $B_m$ is $3m\varepsilon$-DP. For the chosen parameters, the utility bound in Theorem 12 implies that each run of the inner algorithm reaches the quantile $q_\gamma(D)$ with probability at least $1 - \beta^{1/m}$. Since this algorithm is repeated $m$ times in $B_m$, the failure probability gets reduced to $\beta$. Finally, the run time bound follows by combining the run time results in Theorem 12 and Theorem 11. This result improves on LT in terms of privacy cost and gives the points plotted in Figure 1 (right). As in the case of repetition, this result is tight up to constant factors in the exponent, for $m$ all the way up to roughly $\log\frac{1}{\beta}$.
5 Conclusions
Differentially private repetition is a fundamental problem in the design of differentially private algorithms, and private hyperparameter tuning is of great practical importance. Our work shows new trade-offs between privacy overhead and computational overhead, and our main result is a lower bound showing that there is no general algorithm that can do significantly better. Indeed, we show that for constant privacy overhead, the computational overhead must be polynomial in $\frac{1}{\beta}$ for some private algorithms.
It is natural to ask if there are reasonable restrictions one can place on the inputs or the private algorithms of interest, for which better repetition theorems and/or hyperparameter tuning algorithms are possible. Such beyond-worst-case results are not uncommon in differential privacy [23, 8, 31, 2]. There are some assumptions that hyperparameter tuning algorithms either implicitly [30] or explicitly [16] make in the non-private setting, and one may ask if such assumptions help with better trade-offs for private hyperparameter search. Even absent computational constraints, the constant-factor privacy cost overhead in LT can be large. Once again, while it is unavoidable in the worst case, one may hope to design algorithms that do better for non-worst-case instances. We leave these important questions for future work.
References
- [1] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Conference on Computer and Communications Security, pages 308–318, 2016. doi:10.1145/2976749.2978318.
- [2] Hilal Asi and John C Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 14106–14117. Curran Associates, Inc., 2020. URL: https://proceedings.neurips.cc/paper_files/paper/2020/file/a267f936e54d7c10a2bb70dbe6ad7a89-Paper.pdf.
- [3] Raef Bassily, Kobbi Nissim, Uri Stemmer, and Abhradeep Thakurta. Practical locally private heavy hitters. Journal of Machine Learning Research, 21(16):1–42, 2020. URL: http://jmlr.org/papers/v21/18-786.html.
- [4] Christian Borgs, Jennifer Chayes, Adam Smith, and Ilias Zadik. Revealing network structure, confidentially: Improved rates for node-private graphon estimation. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 533–543, 2018. doi:10.1109/FOCS.2018.00057.
- [5] Gavin Brown, Jonathan Hayase, Samuel Hopkins, Weihao Kong, Xiyang Liu, Sewoong Oh, Juan C Perdomo, and Adam Smith. Insufficient statistics perturbation: Stable estimators for private least squares extended abstract. In Shipra Agrawal and Aaron Roth, editors, Proceedings of Thirty Seventh Conference on Learning Theory, volume 247 of Proceedings of Machine Learning Research, pages 750–751. PMLR, 30 June–03 July 2024. URL: https://proceedings.mlr.press/v247/brown24b.html.
- [6] Edith Cohen and Xin Lyu. The target-charging technique for privacy analysis across interactive computations. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 62139–62168. Curran Associates, Inc., 2023. URL: https://proceedings.neurips.cc/paper_files/paper/2023/file/c3fe2a07ec47b89c50e89706d2e23358-Paper-Conference.pdf.
- [7] Michael Dinitz, Satyen Kale, Silvio Lattanzi, and Sergei Vassilvitskii. Almost tight bounds for differentially private densest subgraph. In Yossi Azar and Debmalya Panigrahi, editors, Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, New Orleans, LA, USA, January 12-15, 2025, pages 2908–2950. SIAM, 2025. doi:10.1137/1.9781611978322.94.
- [8] Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC ’09, pages 371–380, New York, NY, USA, 2009. Association for Computing Machinery. doi:10.1145/1536414.1536466.
- [9] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. Journal of Privacy and Confidentiality, 7(3):17–51, 2017. doi:10.29012/jpc.v7i3.405.
- [10] Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum, and Salil Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC ’09, pages 381–390, New York, NY, USA, 2009. Association for Computing Machinery. doi:10.1145/1536414.1536467.
- [11] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3 & 4):211–407, 2014. doi:10.1561/0400000042.
- [12] Vitaly Feldman, Audra McMillan, Satchit Sivakumar, and Kunal Talwar. Instance-optimal private density estimation in the wasserstein distance. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 90061–90131. Curran Associates, Inc., 2024. URL: https://proceedings.neurips.cc/paper_files/paper/2024/file/a406c9f8eb70032a21110a4d86735ab9-Paper-Conference.pdf.
- [13] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42(6):1115–1145, November 1995. doi:10.1145/227683.227684.
- [14] Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, and Kunal Talwar. Differentially private combinatorial optimization. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’10, pages 1106–1125, USA, 2010. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611973075.90.
- [15] Moritz Hardt and Guy N. Rothblum. A multiplicative weights mechanism for privacy-preserving data analysis. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 61–70, 2010. doi:10.1109/FOCS.2010.85.
- [16] Elad Hazan, Adam Klivans, and Yang Yuan. Hyperparameter optimization: a spectral approach. In International Conference on Learning Representations, 2018. URL: https://openreview.net/forum?id=H1zriGeCZ.
- [17] Shlomi Hod and Ran Canetti. Differentially Private Release of Israel’s National Registry of Live Births. In 2025 IEEE Symposium on Security and Privacy (SP), pages 100–100, Los Alamitos, CA, USA, May 2025. IEEE Computer Society. doi:10.1109/SP61157.2025.00101.
- [18] Thomas Holenstein. Parallel repetition: simplifications and the no-signaling case. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, STOC ’07, pages 411–419, New York, NY, USA, 2007. Association for Computing Machinery. doi:10.1145/1250790.1250852.
- [19] Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43:439–561, 2006.
- [20] Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pages 298–309, New York, NY, USA, 2019. Association for Computing Machinery. doi:10.1145/3313276.3316377.
- [21] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’07, pages 94–103, USA, 2007. IEEE Computer Society. doi:10.1109/FOCS.2007.41.
- [22] Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pages 263–275, 2017. doi:10.1109/CSF.2017.11.
- [23] Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, STOC ’07, pages 75–84, New York, NY, USA, 2007. Association for Computing Machinery. doi:10.1145/1250790.1250803.
- [24] Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In International Conference on Learning Representations, 2017. URL: https://openreview.net/forum?id=HkwoSDPgg.
- [25] Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Ulfar Erlingsson. Scalable private learning with PATE. In International Conference on Learning Representations, 2018. URL: https://openreview.net/forum?id=rkZB1XbRZ.
- [26] Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with renyi differential privacy. In International Conference on Learning Representations, 2022. URL: https://openreview.net/forum?id=-70L8lpp9DF.
- [27] Ran Raz. A parallel repetition theorem. In Proceedings of the Twenty-Seventh Annual ACM Symposium on Theory of Computing, STOC ’95, pages 447–456, New York, NY, USA, 1995. Association for Computing Machinery. doi:10.1145/225058.225181.
- [28] Lucas Rosenblatt, Xiaoyan Liu, Samira Pouyanfar, Eduardo de Leon, Anuj Desai, and Joshua Allen. Differentially private synthetic data: Applied evaluations and enhancements. arXiv preprint arXiv:2011.05537, 2020. arXiv:2011.05537.
- [29] Aaron Roth and Tim Roughgarden. Interactive privacy via the median mechanism. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, STOC ’10, pages 765–774, New York, NY, USA, 2010. Association for Computing Machinery. doi:10.1145/1806689.1806794.
- [30] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012. URL: https://proceedings.neurips.cc/paper_files/paper/2012/file/05311655a15b75fab86956663e1819cd-Paper.pdf.
- [31] Abhradeep Guha Thakurta and Adam Smith. Differentially private feature selection via stability arguments, and the robustness of the lasso. In Shai Shalev-Shwartz and Ingo Steinwart, editors, Proceedings of the 26th Annual Conference on Learning Theory, volume 30 of Proceedings of Machine Learning Research, pages 819–850, Princeton, NJ, USA, 12–14 June 2013. PMLR. URL: https://proceedings.mlr.press/v30/Guha13.html.
- [32] Salil Vadhan. The Complexity of Differential Privacy, pages 347–450. Springer, Yehuda Lindell, ed., 2017. doi:10.1007/978-3-319-57048-8_7.
- [33] Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Modeling tabular data using conditional gan. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL: https://proceedings.neurips.cc/paper_files/paper/2019/file/254ed7d2de3b23ab10936522dd547b78-Paper.pdf.
- [34] Jinsung Yoon, James Jordon, and Mihaela van der Schaar. PATE-GAN: Generating synthetic data with differential privacy guarantees. In International Conference on Learning Representations, 2019. URL: https://openreview.net/forum?id=S1zk9iRqF7.
