The Dependent Doors Problem: An Investigation into Sequential Decisions without Feedback

Authors Amos Korman, Yoav Rodeh


Cite As

Amos Korman and Yoav Rodeh. The Dependent Doors Problem: An Investigation into Sequential Decisions without Feedback. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 81:1-81:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017).


We introduce the dependent doors problem as an abstraction for situations in which one must perform a sequence of possibly dependent decisions, without receiving feedback on the effectiveness of previously made actions. Informally, the problem considers a set of d doors that are initially closed, and the aim is to open all of them as fast as possible. To open a door, the algorithm knocks on it, and the door opens (or not) according to some probability distribution. This distribution may depend on which other doors are currently open, as well as on which other doors were open during each of the previous knocks on that door. The algorithm aims to minimize the expected time until all doors open. Crucially, it must act at all times without knowing whether, or which, other doors have already opened.

In this work, we focus on scenarios where dependencies between doors are both positively correlated and acyclic. The fundamental distribution of a door describes the probability that it opens under the best possible conditions (with respect to other doors being open or closed). We show that if, in two configurations of d doors, corresponding doors share the same fundamental distribution, then these configurations have the same optimal running time up to a universal constant, regardless of the dependencies between doors and of the particular distributions involved. We also identify algorithms that are optimal up to a universal constant factor. For the case in which all doors share the same fundamental distribution, we additionally provide a simpler algorithm and a formula to calculate its running time. We furthermore analyse the price of lacking feedback for several configurations governed by standard fundamental distributions. In particular, we show that the price is logarithmic in d for memoryless doors, but can potentially grow to be linear in d for other distributions. We then turn our attention to precise bounds.
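To make the model concrete, here is a minimal simulation sketch for the special case of independent, memoryless doors (door i opens on each knock with a fixed probability p[i]), driven by a simple round-robin knocking schedule. Both the independence assumption and the round-robin schedule are illustrative choices, not the near-optimal algorithms of the paper; the point is that the schedule receives no feedback about which doors are open.

```python
import random

def simulate_round_robin(p, seed=0):
    """Knock on doors cyclically until all are open; return the number of knocks.

    p[i] is the per-knock opening probability of door i. Doors here are
    independent and memoryless -- a special case of the general model.
    The schedule itself never observes `open_doors`: it just cycles.
    """
    rng = random.Random(seed)
    d = len(p)
    open_doors = [False] * d
    knocks = 0
    i = 0
    while not all(open_doors):
        # A knock on an already-open door is simply wasted.
        if not open_doors[i] and rng.random() < p[i]:
            open_doors[i] = True
        knocks += 1
        i = (i + 1) % d
    return knocks

def estimate_expected_time(p, trials=2000):
    """Monte-Carlo estimate of the schedule's expected running time."""
    return sum(simulate_round_robin(p, seed=t) for t in range(trials)) / trials
```

For example, `estimate_expected_time([0.5, 0.5])` approximates how long the round-robin schedule takes on two fair memoryless doors; comparing such estimates across schedules is one way to explore the cost of acting without feedback.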
Even for the case of two doors, identifying the optimal sequence is an intriguing combinatorial question. Here, we study the case of two cascading memoryless doors. That is, the first door opens on each knock independently with probability p_1. The second door can only open if the first door is open, in which case it will open on each knock independently with probability p_2. We solve this problem almost completely by identifying algorithms that are optimal up to an additive term of 1.
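The cascading two-door case above can likewise be sketched in a few lines. The schedule below alternates 1, 2, 1, 2, …, which is only a baseline for illustration; the near-optimal sequences identified in the paper are generally non-trivial interleavings, not simple alternation.

```python
import random

def simulate_cascade(p1, p2, sequence, rng):
    """Two cascading memoryless doors.

    Door 1 opens on each knock independently with probability p1.
    Door 2 can only open once door 1 is open, and then opens on each
    knock independently with probability p2. `sequence` is an infinite
    iterator of door indices (1 or 2) -- a feedback-free schedule.
    Returns the time step at which both doors are open.
    """
    open1 = open2 = False
    for t, door in enumerate(sequence, start=1):
        if door == 1 and not open1:
            open1 = rng.random() < p1
        elif door == 2 and open1 and not open2:
            # A knock on door 2 before door 1 is open is wasted.
            open2 = rng.random() < p2
        if open1 and open2:
            return t

def alternate():
    """Baseline schedule 1, 2, 1, 2, ... for comparison purposes."""
    while True:
        yield 1
        yield 2

rng = random.Random(1)
trials = 5000
avg = sum(simulate_cascade(0.5, 0.5, alternate(), rng) for _ in range(trials)) / trials
```

Estimating `avg` for different schedules gives an empirical handle on the combinatorial question of which knocking sequence minimizes the expected completion time.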
Keywords

  • No Feedback
  • Sequential Decisions
  • Probabilistic Environment
  • Exploration and Exploitation
  • Golden Ratio


