In a differentially private sequential learning setting, agents introduce endogenous noise into their public actions to limit the information leaked about their private signals. The impact of this privacy noise depends on whether the signals are continuous or binary. For continuous signals and a finite privacy budget ε > 0, we propose a smooth randomized response mechanism that adapts the noise level to the distance from a decision threshold, in contrast to standard randomized response with uniform noise. This allows agents' actions to better reflect both their private signals and the public history, achieving an accelerated convergence rate of Θ_ε(log n), which surpasses the Θ(√(log n)) rate of the non-private regime. In this case, privacy noise amplifies the log-likelihood ratio over time and improves information aggregation. For binary signals, differential privacy consistently degrades learning performance, reducing the probability of a correct cascade relative to the non-private baseline. Here, agents adopt a constant randomized response strategy before an information cascade occurs; this constant privacy noise reduces the informativeness of their actions and hinders effective learning until a cascade begins. Nevertheless, even for binary signals, the probability of a correct cascade does not vary monotonically with the privacy budget ε: at certain values of ε, it increases as the budget decreases, because the threshold for initiating an information cascade jumps up by one when ε crosses below those values.
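The abstract does not specify the functional form of the smooth mechanism, so the following is only a minimal Python sketch of one way a distance-adaptive randomized response could look. The exponential decay of the flip probability with distance from the threshold, and the function and parameter names (`smooth_randomized_response`, `llr`, `threshold`, `eps`), are illustrative assumptions, not the paper's construction.

```python
import math
import random

def smooth_randomized_response(llr, threshold=0.0, eps=1.0):
    """Illustrative sketch (not the paper's mechanism): the agent's intended
    action follows the sign of its log-likelihood ratio (llr) relative to a
    decision threshold, and the probability of flipping that action shrinks
    as the signal moves away from the threshold."""
    intended = 1 if llr >= threshold else 0
    dist = abs(llr - threshold)
    # At the threshold (dist = 0) the flip probability equals the standard
    # randomized-response level 1 / (1 + e^eps); far from it, noise vanishes,
    # so actions near the boundary are noisy and confident actions are not.
    flip_prob = math.exp(-dist) / (1.0 + math.exp(eps))
    if random.random() < flip_prob:
        return 1 - intended
    return intended
```

Under this sketch, uniform randomized response is recovered by making the flip probability independent of `dist`; the adaptive version instead lets strongly informed agents act almost deterministically, which is the intuition behind the improved information aggregation described above.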
@InProceedings{liu_et_al:LIPIcs.FORC.2025.18,
  author    = {Liu, Yuxin and Rahimian, M. Amin},
  title     = {{Differentially Private Sequential Learning}},
  booktitle = {6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages     = {18:1--18:6},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN      = {978-3-95977-367-6},
  ISSN      = {1868-8969},
  year      = {2025},
  volume    = {329},
  editor    = {Bun, Mark},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.18},
  URN       = {urn:nbn:de:0030-drops-231450},
  doi       = {10.4230/LIPIcs.FORC.2025.18},
  annote    = {Keywords: Differential Privacy, Sequential Learning, Randomized Response, Learning Efficiency}
}