Stronger Tradeoffs for Orthogonal Range Querying in the Semigroup Model

Authors: Swaroop N. Prabhakar, Vikram Sharma

Author Details

Swaroop N. Prabhakar
  • The Institute of Mathematical Sciences, HBNI, Chennai, 600113, India
Vikram Sharma
  • The Institute of Mathematical Sciences, HBNI, Chennai, 600113, India

Cite As

Swaroop N. Prabhakar and Vikram Sharma. Stronger Tradeoffs for Orthogonal Range Querying in the Semigroup Model. In 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 122, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


In this paper, we focus on lower bounds for data structures supporting orthogonal range querying on m points in n dimensions in the semigroup model. Such a data structure usually maintains a family of "canonical subsets" of the given set of points and, on a range query, outputs a disjoint union of the appropriate subsets. Fredman showed that in order to prove lower bounds in the semigroup model, it suffices to prove a lower bound on a certain combinatorial tradeoff between two parameters: (a) the total sizes of the canonical subsets, and (b) the total number of canonical subsets required to cover all query ranges. In particular, he showed that the arithmetic mean of these two parameters is Omega(m log^n m). We strengthen this tradeoff by showing that the geometric mean of the same two parameters is Omega(m log^n m).

Our second result is an alternate proof of Fredman's tradeoff in the one-dimensional setting. The problem of answering range queries using canonical subsets can be formulated as factoring a specific boolean matrix into a product of two boolean matrices, one representing the canonical sets and the other capturing the appropriate disjoint unions of the former that output all possible range queries. In this formulation, we can ask what an optimal data structure is, i.e., one that minimizes the sum of the two parameters mentioned above, and how the balanced binary search tree compares with this optimal data structure in the two parameters. Finding an optimal data structure is a non-linear optimization problem. In one dimension, Fredman's result implies that the minimum value of the objective function is Omega(m log m), which means that at least one of the parameters has to be Omega(m log m). We show that both parameters in an optimal solution have to be Omega(m log m). This implies that balanced binary search trees are near-optimal data structures for range querying in one dimension.

Along the way, we derive intermediate results on factoring matrices, not necessarily boolean, while trying to minimize the norms of the factors; these may be of independent interest.
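The one-dimensional mechanism described in the abstract can be sketched concretely: the nodes of a balanced tree (here, a segment tree) form the family of canonical subsets, every range query is answered as a disjoint union of O(log m) of them, and the whole scheme is one boolean factorization Q = A . B of the query-incidence matrix. This is a minimal illustrative sketch for a toy m = 8, not the paper's notation; the helper names (build_canonical, decompose) are assumptions of this example.

```python
import math

def build_canonical(m):
    """Segment-tree canonical subsets of {0, ..., m-1}, each a half-open
    interval [lo, hi); returns (interval -> index map, interval list)."""
    index, intervals = {}, []
    def build(lo, hi):
        index[(lo, hi)] = len(intervals)
        intervals.append((lo, hi))
        if hi - lo > 1:
            mid = (lo + hi) // 2
            build(lo, mid)
            build(mid, hi)
    build(0, m)
    return index, intervals

def decompose(qlo, qhi, lo, hi, index):
    """Indices of canonical sets whose disjoint union is the query [qlo, qhi)."""
    if qhi <= lo or hi <= qlo:          # no overlap with this node
        return []
    if qlo <= lo and hi <= qhi:         # node fully inside the query
        return [index[(lo, hi)]]
    mid = (lo + hi) // 2
    return (decompose(qlo, qhi, lo, mid, index) +
            decompose(qlo, qhi, mid, hi, index))

m = 8
index, intervals = build_canonical(m)

# Parameter (a): total size of the canonical family. For a power of two
# this is exactly m * (log2 m + 1), i.e. Theta(m log m).
total_size = sum(hi - lo for lo, hi in intervals)
assert total_size == m * (int(math.log2(m)) + 1)

# Every range decomposes into at most 2 * ceil(log2 m) canonical sets,
# and the union is exact and disjoint.
queries = [(i, j) for i in range(m) for j in range(i + 1, m + 1)]
for qlo, qhi in queries:
    parts = decompose(qlo, qhi, 0, m, index)
    covered = sorted(p for k in parts for p in range(*intervals[k]))
    assert covered == list(range(qlo, qhi))
    assert len(parts) <= 2 * math.ceil(math.log2(m))

# Boolean-factorization view: row k of B is the indicator vector of the
# k-th canonical set; row q of A selects the canonical sets covering
# query q. The boolean product A . B reproduces the query matrix Q.
B = [[1 if lo <= p < hi else 0 for p in range(m)] for lo, hi in intervals]
A = [[0] * len(intervals) for _ in queries]
for q, (qlo, qhi) in enumerate(queries):
    for k in decompose(qlo, qhi, 0, m, index):
        A[q][k] = 1
for q, (qlo, qhi) in enumerate(queries):
    for p in range(m):
        prod = any(A[q][k] and B[k][p] for k in range(len(intervals)))
        assert prod == (qlo <= p < qhi)
```

For this balanced tree both of Fredman's parameters behave well: the family has total size Theta(m log m), and each of the quadratically many ranges is covered by O(log m) canonical sets, which is what makes the tree a natural candidate for the near-optimality question the paper studies.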

Subject Classification

ACM Subject Classification
  • Information systems → Multidimensional range search

Keywords and Phrases
  • range querying
  • lower bounds
  • matrix factorization
  • Lagrange dual function



References

  1. Pankaj Agarwal. Range Searching. Technical report, Duke University, 2016.
  2. Pankaj Agarwal and Jeff Erickson. Geometric Range Searching and its Relatives. Technical report, University of Illinois, 1997.
  3. Jon Louis Bentley. Multidimensional Divide-and-Conquer. Commun. ACM, 23(4):214-229, April 1980.
  4. Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. Computational Geometry: Algorithms and Applications. Springer-Verlag TELOS, Santa Clara, CA, USA, 3rd edition, 2008.
  5. Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
  6. Richard A. Brualdi and Dragos Cvetkovic. A Combinatorial Approach to Matrix Theory and Its Applications. Chapman and Hall/CRC, 1st edition, 2008.
  7. Bernard Chazelle and Leonidas J. Guibas. Fractional Cascading: A Data Structuring Technique. Algorithmica, 1(1):133-162, November 1986.
  8. J. F. Elliott. The Characteristic Roots of Certain Real Symmetric Matrices. Master’s thesis, University of Tennessee, 1953.
  9. M. Fredman and M. Saks. The Cell Probe Complexity of Dynamic Data Structures. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (STOC '89), pages 345-354, New York, NY, USA, 1989. ACM.
  10. M. L. Fredman. The Inherent Complexity of Dynamic Data Structures which Accommodate Range Queries. In 21st Annual Symposium on Foundations of Computer Science (FOCS 1980), pages 191-199, October 1980.
  11. Michael L. Fredman. A Lower Bound on the Complexity of Orthogonal Range Queries. J. ACM, 28(4):696-705, October 1981.
  12. Michael L. Fredman. The Complexity of Maintaining an Array and Computing its Partial Sums. J. ACM, 29(1):250-260, January 1982.
  13. N. Higham. Functions of Matrices. Society for Industrial and Applied Mathematics, 2008.
  14. G. S. Lueker. A Data Structure for Orthogonal Range Queries. In 19th Annual Symposium on Foundations of Computer Science (FOCS 1978), pages 28-34, October 1978.
  15. Kurt Mehlhorn. Data Structures and Algorithms 3: Multi-dimensional Searching and Computational Geometry. Springer-Verlag, New York, NY, USA, 1984.
  16. Franco P. Preparata and Michael I. Shamos. Computational Geometry: An Introduction. Springer-Verlag, 1985.
  17. A. Toni. Lower Bounds on Zero–One Matrices. Linear Algebra and its Applications, 376:275-282, 2004.