How to Send a Real Number Using a Single Bit (And Some Shared Randomness)

Authors: Ran Ben Basat, Michael Mitzenmacher, and Shay Vargaftik


Author Details

Ran Ben Basat
  • University College London, UK
Michael Mitzenmacher
  • Harvard University, Cambridge, MA, USA
Shay Vargaftik
  • VMware Research, Herzliya, Israel


We thank the anonymous reviewers, Moshe Gabel, and Gal Mendelson for their helpful feedback and comments.

Cite As

Ran Ben Basat, Michael Mitzenmacher, and Shay Vargaftik. How to Send a Real Number Using a Single Bit (And Some Shared Randomness). In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


We consider the fundamental problem of communicating an estimate of a real number x ∈ [0,1] using a single bit. A sender that knows x chooses a value X ∈ {0,1} to transmit. In turn, a receiver estimates x based on the value of X. The goal is to minimize the cost, defined as the worst-case (over the choice of x) expected squared error. We first overview common biased and unbiased estimation approaches and prove their optimality when no shared randomness is allowed. We then show how a small amount of shared randomness, which can be as low as a single bit, reduces the cost in both cases. Specifically, we derive lower bounds on the cost attainable by any algorithm with unrestricted use of shared randomness and propose optimal and near-optimal solutions that use a small number of shared random bits. Finally, we discuss open problems and future directions.
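The two baselines the abstract contrasts can be sketched concretely. Below is a minimal Python sketch, not the paper's optimal construction: unbiased stochastic rounding, whose worst-case expected squared error is x(1-x) ≤ 1/4 (attained at x = 1/2), versus one-bit subtractive dithering, where a shared uniform dither brings the cost down to 1/12 for every x. The function names and the modeling of shared randomness as a shared seed are illustrative assumptions.

```python
import random

def stochastic_rounding(x, seed):
    """Unbiased scheme without shared randomness: send 1 with probability x;
    the receiver's estimate is the bit itself. The expected squared error
    is x*(1-x), which peaks at 1/4 when x = 1/2."""
    return 1.0 if random.Random(seed).random() < x else 0.0

def subtractive_dithering(x, seed):
    """Both parties derive the same dither u ~ U[-1/2, 1/2] from shared
    randomness (modeled here as a shared seed). The sender quantizes x + u
    to a single bit; the receiver subtracts u. The resulting error is
    uniform on [-1/2, 1/2] for every x, so the expected squared error
    is 1/12 regardless of x."""
    u = random.Random(seed).uniform(-0.5, 0.5)
    bit = 1 if x + u >= 0.5 else 0   # the single transmitted bit
    return bit - u                   # receiver's estimate

def empirical_mse(scheme, x, trials=100_000):
    """Monte-Carlo estimate of the expected squared error at a fixed x."""
    rng = random.Random(2021)
    return sum((scheme(x, rng.random()) - x) ** 2
               for _ in range(trials)) / trials
```

For example, `empirical_mse(stochastic_rounding, 0.5)` comes out near 0.25 while `empirical_mse(subtractive_dithering, 0.5)` comes out near 1/12 ≈ 0.083. Note that subtractive dithering as sketched consumes a full shared uniform value; the paper's results concern how few shared random bits (as low as one) suffice to approach such gains.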

ACM Subject Classification
  • Theory of computation → Rounding techniques
  • Theory of computation → Stochastic approximation

Keywords
  • Randomized Algorithms
  • Approximation Algorithms
  • Shared Randomness
  • Distributed Protocols
  • Estimation
  • Subtractive Dithering


