Hitting and Covering Affine Families of Convex Polyhedra, with Applications to Robust Optimization
Abstract
Geometric hitting set problems, in which we seek a smallest set of points that collectively hit a given set of ranges, are ubiquitous in computational geometry. Most often, the set is discrete and is given explicitly. We propose new variants of these problems, dealing with continuous families of convex polyhedra, and show that they capture decision versions of the two-level finite adaptability problem in robust optimization. We show that these problems can be solved in strongly polynomial time when the size of the hitting/covering set and the dimension of the polyhedra and the parameter space are constant. We also show that the hitting set problem can be solved in strongly quadratic time for one-parameter families of convex polyhedra in constant dimension. This leads to new tractability results for finite adaptability that are the first ones with so-called left-hand-side uncertainty, where the underlying problem is non-linear.
Keywords and phrases: Geometric hitting set problem, Continuous families of polyhedra, Robust optimization
2012 ACM Subject Classification: Theory of computation → Computational geometry; Theory of computation → Problems, reductions and completeness
Acknowledgements: The authors thank Safia Kedad-Sidhoum, Anton Medvedev and Frédéric Meunier for helpful discussions on finite adaptability and Guillaume Moroz for helpful discussions on computer algebra.
Editors: Paweł Gawrychowski, Filip Mazowiecki, and Michał Skrzypczak
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
In this paper we present three contributions on hitting and covering problems for families of convex polyhedra. (Throughout the paper, all polyhedra considered are convex.) First, we introduce a new flavor of these problems, dealing with hitting or covering continuous families of polyhedra, and show that they capture decision versions of the two-level finite adaptability problem in robust optimization. Next, we show that general methods from quantifier elimination yield strongly polynomial time algorithms for these problems when certain parameters are constant. Last, we present a fixed-parameter algorithm for a special case of our continuous hitting set problem.
1.1 Hitting affine families of polyhedra
An affinely parameterized family of polyhedra in $\mathbb{R}^n$, or an affine family of polyhedra for short, is a continuous family $(P(\lambda))_{\lambda \in \Lambda}$ of polyhedra in $\mathbb{R}^n$ defined by a domain $\Lambda \subseteq \mathbb{R}^p$ and two affine maps $A \colon \Lambda \to \mathbb{R}^{m \times n}$ and $b \colon \Lambda \to \mathbb{R}^m$ via

$P(\lambda) = \{\, x \in \mathbb{R}^n : A(\lambda)\,x \le b(\lambda) \,\}. \qquad (1)$
Observe that an affine family of polyhedra has three defining parameters: the dimension $n$ of the ambient space of the polyhedra, the dimension $p$ of the parameter space, and the number $m$ of constraints defining each polyhedron. When every polyhedron $P(\lambda)$ is bounded, we call $(P(\lambda))_{\lambda \in \Lambda}$ an affine family of polytopes.
We call a set $H \subseteq \mathbb{R}^n$ a hitting set for an affine family of polyhedra if $H$ intersects every member of that family. See Figure 1 for some examples. We adopt the following important convention: if an affine family of polyhedra has one empty member, that is if $P(\lambda) = \emptyset$ for some $\lambda \in \Lambda$, then the family has no hitting set. We first consider the following computational problem:
Given an integer $k$ and an affine family of polyhedra, does that family admit a hitting set of size $k$?
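To make the problem concrete, here is a small illustrative sketch (our own toy example, not from the paper): a candidate hitting set is tested against sampled members of an affine family. Sampling the parameter can refute a candidate but never certify it; the algorithms of Sections 3 and 4 address the continuous problem exactly.

```python
def in_polyhedron(p, A, b, eps=1e-9):
    """Check A p <= b componentwise, i.e. p lies in {x : A x <= b}."""
    return all(sum(a * x for a, x in zip(row, p)) <= bi + eps
               for row, bi in zip(A, b))

def is_hitting_set_sampled(points, A_of, b_of, lam_samples):
    """Heuristic check: do the points hit P(lam) = {x : A(lam) x <= b(lam)}
    for every sampled parameter value?  Sampling can refute a candidate
    hitting set, but certifying one needs the continuous structure."""
    return all(any(in_polyhedron(p, A_of(l), b_of(l)) for p in points)
               for l in lam_samples)

# Toy family: unit squares sliding along the x-axis,
# P(lam) = [lam, lam + 1] x [0, 1] for lam in [0, 2].
A_of = lambda lam: [[-1, 0], [1, 0], [0, -1], [0, 1]]
b_of = lambda lam: [-lam, lam + 1, 0, 1]
lams = [i / 100 for i in range(201)]  # samples of the domain [0, 2]

print(is_hitting_set_sampled([(1.0, 0.5)], A_of, b_of, lams))              # False
print(is_hitting_set_sampled([(1.0, 0.5), (2.0, 0.5)], A_of, b_of, lams))  # True
```

One point cannot hit every square here (it would have to lie in all of them), while the two points split the parameter range between them, illustrating a hitting set of size two.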
1.2 Context: robust optimization and finite adaptability
This continuous hitting set problem arises as the decision version of a special case of the finite adaptability problem in robust optimization [6]. This line of research in mathematical programming deals with uncertainty in planning by modeling the decision to make as the optimization of some objective function under some constraints, with the objective function and the constraints depending not only on the free variables, but also on some uncertainty parameter $\xi$. One then searches for an optimal solution that is valid for all values of the uncertainty parameter $\xi$. Among robust optimization problems, those that involve successive stages of decision are of particular interest [39]. Here is, for instance, the two-stage robust optimization problem:
$\min_{x} \;\max_{\xi \in \Xi} \;\min_{y \,:\, T(\xi)x + W(\xi)y \le h(\xi)} \; c(\xi)^\top x + d^\top y \qquad (2)$

where $\Xi$ is the domain of uncertainty, $T(\xi)$, $W(\xi)$, $c(\xi)$ and $h(\xi)$ are input matrices and vectors depending on the uncertainty parameter $\xi$, and $d$ is a deterministic input vector. The variables $x$ and $y$ correspond respectively to the first and second stages of the optimization, where the second stage takes place only once the uncertainty has been revealed. The problem is therefore to optimize in the first stage the worst-case outcome from the second stage.
In 2010, Bertsimas and Caramanis [8] proposed to approximate the solution of Problem (2) by solving the following finite adaptability problem:
$\min_{x,\, y^1, \ldots, y^k} \;\max_{\xi \in \Xi} \;\min_{i \in [k] \,:\, T(\xi)x + W(\xi)y^i \le h(\xi)} \; c(\xi)^\top x + d^\top y^i \qquad (3)$

(We use $[k]$ to denote the set $\{1, \ldots, k\}$.) For fixed $k$ this problem is called the $k$-adaptability problem. It models the precomputation, in the first stage, of $k$ candidate values $y^1, \ldots, y^k$ for the second-stage variable $y$. Once the uncertainty is revealed, the second stage consists in selecting one of the precomputed values that satisfies the constraints and minimizes the objective. Under a suitable continuity assumption, the value of Problem (3) converges to the value of (2) as $k \to \infty$ [27, 2]. See [24, 33, 11, 36] for recent work on Problem (3) and variants.
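As a concrete illustration, the following sketch (with a made-up scalar instance; the function and instance names are ours) evaluates the inner max–min of a $k$-adaptability formulation for fixed candidate values, over a sampled uncertainty set; sampling under-approximates the true worst case.

```python
INF = float("inf")

def k_adapt_value_sampled(cands, c, W_of, h_of, xi_samples, eps=1e-9):
    """Worst case, over sampled uncertainty values xi, of the cost of the
    best feasible precomputed candidate y^i.  Returns inf if some sampled
    xi leaves every candidate infeasible (the instance is then refuted)."""
    worst = -INF
    for xi in xi_samples:
        W, h = W_of(xi), h_of(xi)
        feas = [sum(ci * yi for ci, yi in zip(c, y)) for y in cands
                if all(sum(w * yi for w, yi in zip(row, y)) <= hi + eps
                       for row, hi in zip(W, h))]
        worst = max(worst, min(feas) if feas else INF)
    return worst

# Toy instance: scalar y, cost y, single constraint y >= xi
# (written -y <= -xi), uncertainty xi sampled in [0, 1].
W_of = lambda xi: [[-1.0]]
h_of = lambda xi: [-xi]
xis = [i / 10 for i in range(11)]

print(k_adapt_value_sampled([(0.5,), (1.0,)], (1.0,), W_of, h_of, xis))  # 1.0
print(k_adapt_value_sampled([(0.5,)], (1.0,), W_of, h_of, xis))          # inf
```

With two candidates, the cheap one serves the small uncertainty values and the expensive one covers the rest; the single cheap candidate is infeasible for large values, so its worst case is infinite.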
Throughout the paper we only consider the case where $c$, $T$, $W$ and $h$ are affine maps of the uncertainty parameter $\xi$.
1.3 Our results
Problem correspondence.
The continuous hitting set problem stated above corresponds to the decision version of the optimization Problem (3) when there is no first-stage decision (no variable $x$); see Lemma 5. This correspondence is also the motivation for our convention that an affine family of polyhedra with one empty member has no hitting set, as such a family corresponds to an infeasible problem. Like in the discrete setting, the hitting and covering problems for affine families of polyhedra enjoy some form of duality (see Section 2.1). We prove (Lemma 6) that the decision version of the general finite adaptability problem is a special case of the following covering problem:
Given an integer $k$ and two affine families of polyhedra in the same ambient space, do there exist $k$ polyhedra in the second family whose union covers some polyhedron from the first family?
Notice that this problem is purely linear, whereas the finite adaptability Problem (3) exhibits some non-linearity.
Complexity.
We then turn our attention to the computational complexity of these problems. We first observe that quantifier elimination methods yield strongly polynomial solutions for these problems in fixed dimension. (The standard model of computation in computational geometry is the Real-RAM model, a variant of the classical random access machine (RAM) that operates on arbitrary real numbers instead of finite binary words; see [20] and [21]. Algorithms that run in polynomial time on a Real-RAM and that translate to polynomial-time algorithms in the usual sense are called strongly polynomial. Whether linear programming can be solved in strongly polynomial time remains a major open problem, known as Smale's 9th problem [34].)
Theorem 1.
For every constant $k$ and constant dimensions of the variables and of the uncertainty parameter, there is an algorithm that solves the decision version of the $k$-adaptability Problem (3) on $m$ constraints, with the uncertainty domain $\Xi$ a polyhedron given as an intersection of $s$ halfspaces, in time strongly polynomial in $m$ and $s$.
Similarly, the hitting and covering problems stated above can be solved in strongly polynomial time when we fix $k$, the ambient dimensions of the polyhedra and the dimension of the parameter space(s) (see Section 3). The exponent in the polynomial bound grows with the values of the fixed parameters, and the fixed-parameter tractability of these problems is a natural question. In the special case where the parameter domain is one-dimensional, we can give an algorithm for the hitting set problem where the order of the complexity does not depend on $k$ or the ambient dimension of the polyhedra.
Theorem 2.
For every constant $d$ and $k$, there is an algorithm that decides in strongly quadratic time, given a one-parameter affine family of polytopes $(P(\lambda))_{\lambda \in \Lambda}$ in $\mathbb{R}^d$ with $\Lambda \subseteq \mathbb{R}$, each polytope defined by at most $m$ constraints, whether the family admits a hitting set of size $k$.
If $k$ is part of the input, then our algorithm still has polynomial complexity in the Real-RAM model (Proposition 13) but is no longer strongly polynomial. We can extend the method behind Theorem 2 to the optimization problem:
Corollary 3.
For every constant $k$ and constant dimension of the second-stage variable, there is an algorithm that solves the $k$-adaptability Problem (3), in the special case where there is no first-stage decision and the uncertainty domain $\Xi$ is an interval, in strongly quadratic time.
1.4 Background and related works
1.4.1 Complexity of finite adaptability
Bertsimas and Caramanis proved [8, Prop. 5] that unless P = NP, there is no polynomial-time algorithm for the finite adaptability problem (3). This already holds in a special case where there is no first-stage decision $x$ and $k$ is constant, i.e. independent of the input. Their proof requires, however, that the dimensions of $y$ and $\xi$ as well as the number of rows in the constraint matrix be part of the input. A natural question is whether the computational complexity becomes polynomial when some of these parameters are constant.
We are only aware of two previous tractability results for Problem (3). (Note that Bertsimas and Caramanis gave a reformulation of Problem (3) as a bilinear optimization problem with a linear objective function [8, Prop. 4], but it does not yield a polynomial-time algorithm even when all dimensions are fixed. Indeed, their reformulation uses a number of variables and bilinear constraints that grows with the number of extreme points of the domain $\Xi$.) On the one hand, Subramanyam, Gounaris and Wiesemann [36, Prop. B.3] observed that Problem (3) reduces to linear programming, and is therefore polynomial, when $k$ and the dimensions of $y$ and $\xi$ are constant (but the number of constraints remains part of the input). On the other hand, Kedad-Sidhoum, Meunier and Medvedev [27, Theorems 1.1 and 1.2] proved that if $k$ is fixed, the number of vertices (and dimension) of $\Xi$ is bounded, and only the right-hand side of the constraints depends on the uncertainty parameter (that is, $c$, $T$ and $W$ are constant), then Problem (3) reduces to a constant number of linear programs and is therefore polynomial in the Turing machine model.
The tractable cases of the decision version of finite adaptability that are identified by Theorem 1 are therefore new in that they allow left-hand-side uncertainty, that is, with the constraint matrix $W$ affine in $\xi$ rather than constant. This makes the underlying problem non-linear in $y$ and $\xi$. An example of a problem from operations research that directly benefits from this tractability result is antenna design [7, 2]. Indeed, it can be modeled by two-stage robust optimization (2) with continuous variables and uncertainty, and the recourse matrices $T$ and $W$ depend on the uncertainty parameter. Moreover, the fact that the recourse matrices are uncertain makes affine decision rules, the standard approximation method for two-stage robust optimization [5, 14.3.1], potentially intractable [5, 14.3.3]. Other examples include variants of project management [5, example 14.2.1], supply chain design [23, 4.1], route planning [23, 4.2] and capital budgeting [23, 4.3]. These variants are obtained in two steps: relaxing the discrete variables into continuous ones, and making the model more expressive by allowing the recourse matrices to depend on the uncertainty. Both steps are common practice in operations research, so these variants are of interest.
Corollary 3, which shows that finite adaptability without first-stage decision and with one-dimensional uncertainty can be solved in strongly quadratic time for any constant $k$ and any constant dimension of the second-stage variable, is the first fixed-parameter tractability result for this problem that we are aware of.
1.4.2 Related notions in computational geometry
Hitting set problems were extensively investigated in discrete and computational geometry for discrete, unstructured families of sets. One typically expects that deciding whether $k$ points suffice to hit $n$ given subsets of $\mathbb{R}^d$ is NP-hard when $k$ or $d$ is part of the input; see for instance the landmark results of Megiddo [31] and Megiddo and Supowit [32]. When both $k$ and $d$ are fixed, the problem can be solved in polynomial time via the computation of an arrangement.
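For intuition about the fixed-$k$ regime, here is a minimal brute-force sketch in dimension $d = 1$ (our own illustration, not from the paper), where a classical exchange argument lets an optimal hitting set use only right endpoints of the input intervals:

```python
from itertools import combinations

def k_points_suffice(intervals, k):
    """Decide whether k points can hit every interval in the list (d = 1).
    A standard exchange argument shows an optimal hitting set may use only
    right endpoints, so we brute-force over k-subsets of those candidates;
    this is polynomial for fixed k, mirroring the fixed-(k, d) regime."""
    cands = sorted({r for (_, r) in intervals})
    if k >= len(cands):
        return True  # one right endpoint per distinct value hits everything
    return any(all(any(l <= p <= r for p in pts) for (l, r) in intervals)
               for pts in combinations(cands, k))

print(k_points_suffice([(0, 1), (2, 3), (0.5, 2.5)], 1))  # False
print(k_points_suffice([(0, 1), (2, 3), (0.5, 2.5)], 2))  # True
```

In higher dimensions the candidate set is replaced by the cells of an arrangement of the range boundaries, which keeps the search finite for fixed $d$.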
Notions of uncertainty related to robust mathematical programming were also investigated by computational geometers. In this context, uncertainty in the input geometric data is typically modeled by replacing single points by regions (say, disks of some small radius), in which the actual point is guaranteed to lie. One can preprocess the regions, and prepare for the worst-case actual input. Recent examples include sorting and finding the Pareto front of imprecise point sets [38, 37]. We refer to Löffler for a general discussion of imprecision in computational geometry [28].
More generally, continuous families of geometric sets are ubiquitous in computational geometry. In range searching, for instance, we are given a collection of points in $d$-dimensional space, and are asked to construct a data structure that can answer queries of the following form: given a range $\gamma$, report the set of points from the collection that are contained in $\gamma$. The ranges are typically restricted to be axis-aligned boxes, halfspaces, or semialgebraic sets, giving rise to the corresponding orthogonal, halfspace, or semialgebraic range searching problems; see the survey of Agarwal [1] for references. In the problem of finding small $\varepsilon$-nets, the goal is similar to ours, in that we wish to find small hitting sets of a continuous family of ranges. More precisely, given a finite set $P$ of points together with a continuous family of ranges (typical examples are all halfspaces, or all unit disks), we wish to find a set $N$ of points that collectively hit all ranges that contain at least an $\varepsilon$ fraction of the points of $P$ [25]. If $N$ is restricted to be a subset of $P$, then $N$ is called a strong $\varepsilon$-net, and a weak $\varepsilon$-net otherwise [15]. While this also bears a superficial resemblance with our question, it is yet quite different, since in our case, all members of the input continuous family have to be hit, whatever their size is. Brönnimann and Goodrich [10], and then Even, Rawitz, and Shahar [22], showed that polynomial-time constructions of small $\varepsilon$-nets imply the existence of approximation algorithms for the discrete hitting set problem. In the same spirit, Elbassioni [19] recently proposed an approximation algorithm for finding small hitting sets of infinite range spaces of bounded VC-dimension, and gave an application to the problem of covering a polygonal region by translates of a given polygon.
2 Relation between hitting/covering and finite adaptability
Let us clarify the relation between the hitting and covering problems for affine families of polytopes and the finite adaptability Problem (3). We assume in this section that the domain of uncertainty $\Xi$ is a polyhedron, the matrices $T(\xi)$, $W(\xi)$ and the vectors $c(\xi)$, $h(\xi)$ depend affinely on the uncertainty parameter $\xi$, and the vector $d$ is deterministic.
2.1 The hitting-covering duality
Let us start by remarking that for any regions $\Lambda' \subseteq \Lambda \subseteq \mathbb{R}^p$, every affine map on $\Lambda$ restricts, in a unique way, to an affine map on $\Lambda'$; conversely, every affine map on $\Lambda'$ extends to an affine map on $\Lambda$, and this extension is unique if $\Lambda$ is contained in the affine span of $\Lambda'$. This implies that any affine family of polyhedra over $\Lambda$ restricts (in a unique way) to an affine family of polyhedra over $\Lambda'$, and that conversely every affine family of polyhedra over $\Lambda'$ extends to an affine family of polyhedra over $\Lambda$, this extension being unique if $\Lambda$ is contained in the affine span of $\Lambda'$.
Let us fix two affine maps $A \colon \mathbb{R}^p \to \mathbb{R}^{m \times n}$ and $b \colon \mathbb{R}^p \to \mathbb{R}^m$ and consider the maps

$P \colon \lambda \mapsto \{x \in \mathbb{R}^n : A(\lambda)\,x \le b(\lambda)\} \quad \text{and} \quad P^* \colon x \mapsto \{\lambda \in \mathbb{R}^p : A(\lambda)\,x \le b(\lambda)\},$

so that $x \in P(\lambda)$ if and only if $\lambda \in P^*(x)$. Hence, switching from $P$ to $P^*$ exchanges hitting and covering in the sense that for any domain $\Lambda \subseteq \mathbb{R}^p$, a set $H \subseteq \mathbb{R}^n$ is a hitting set for $(P(\lambda))_{\lambda \in \Lambda}$ if and only if $\Lambda \subseteq \bigcup_{x \in H} P^*(x)$. It turns out that $(P^*(x))_{x \in \mathbb{R}^n}$ is also an affine family of polyhedra, which we call the dual of $(P(\lambda))_{\lambda \in \mathbb{R}^p}$.
Lemma 4.
If $(P(\lambda))_{\lambda \in \mathbb{R}^p}$ is an affine family of polyhedra in $\mathbb{R}^n$, each defined by $m$ constraints, then $(P^*(x))_{x \in \mathbb{R}^n}$ is an affine family of polyhedra in $\mathbb{R}^p$, each defined by $m$ constraints.
Proof.
Let us fix a basis of $\mathbb{R}^p$, so as to write $\lambda = (\lambda_1, \ldots, \lambda_p)$, $A(\lambda) = A_0 + \sum_{i=1}^p \lambda_i A_i$ and $b(\lambda) = b_0 + \sum_{i=1}^p \lambda_i b_i$, where $A_i \in \mathbb{R}^{m \times n}$ and $b_i \in \mathbb{R}^m$. For $x \in \mathbb{R}^n$, we have

$A(\lambda)\,x - b(\lambda) = (A_0 x - b_0) + \sum_{i=1}^p \lambda_i\,(A_i x - b_i).$

Let $M(x)$ denote the $m \times p$ matrix whose $i$th column is $A_i x - b_i$, so that $A(\lambda)\,x - b(\lambda) = M(x)\,\lambda + (A_0 x - b_0)$. Note that $P^*(x) = \{\lambda \in \mathbb{R}^p : M(x)\,\lambda \le b_0 - A_0 x\}$, with $x \mapsto M(x)$ and $x \mapsto b_0 - A_0 x$ affine maps. It follows that $(P^*(x))_{x \in \mathbb{R}^n}$ is an affine family of polyhedra, with ambient space of dimension $p$, parameter space of dimension $n$ and each polyhedron defined by $m$ constraints.

When $\Lambda$ is a polyhedron in $\mathbb{R}^p$, a natural candidate for the dual of an affine family of polyhedra $(P(\lambda))_{\lambda \in \Lambda}$ is $(P^*(x) \cap \Lambda)_{x \in \mathbb{R}^n}$. This is again an affine family of polyhedra. In other words, restricting the parameter to $\Lambda$ translates, under duality, into intersecting each dual polyhedron with $\Lambda$.
2.2 Direct correspondences
Let us first consider the $k$-adaptability problem (3) without first-stage decision (no variable $x$), that is

$\min_{y^1, \ldots, y^k} \;\max_{\xi \in \Xi} \;\min_{i \in [k] \,:\, W(\xi)y^i \le h(\xi)} \; d^\top y^i \qquad (4)$
Lemma 5.
For any real $v$, the value of the $k$-adaptability problem without first-stage decision (4) is at most $v$ if and only if the affine family of polyhedra defined by

$P(\xi) = \{\, y : W(\xi)\,y \le h(\xi), \; d^\top y \le v \,\}, \qquad \xi \in \Xi,$

admits a hitting set of size $k$.
Proof.
Let $v \in \mathbb{R}$. The value of Problem (4) is at most $v$ if and only if there exist $y^1, \ldots, y^k$ such that

for every $\xi \in \Xi$, there is some $i \in [k]$ with $W(\xi)\,y^i \le h(\xi)$ and $d^\top y^i \le v$.

Hence, the value of Problem (4) is at most $v$ if and only if $(P(\xi))_{\xi \in \Xi}$ admits a hitting set of size $k$. Observe that in particular, Problem (4) is infeasible if and only if there is no feasible solution $y^1, \ldots, y^k$ for any value $v$, that is, no hitting set of size $k$ for the family $(\{y : W(\xi)\,y \le h(\xi)\})_{\xi \in \Xi}$.
When there is a first-stage decision $x$, one may proceed as in Lemma 5, mutatis mutandis, and obtain the following characterizations of the fact that the value of (3) is at most $v$:
-
(a)
The affine family of polyhedra defined by

$P(\xi) = \{\, (x, y) : T(\xi)\,x + W(\xi)\,y \le h(\xi), \; c(\xi)^\top x + d^\top y \le v \,\}, \qquad \xi \in \Xi,$

admits a hitting set of size $k$ in which all points have the same first coordinates (those corresponding to $x$).
-
(b)
There exists $x$ such that the affine family of polyhedra defined by

$P_x(\xi) = \{\, y : W(\xi)\,y \le h(\xi) - T(\xi)\,x, \; d^\top y \le v - c(\xi)^\top x \,\}, \qquad \xi \in \Xi,$

admits a hitting set of size $k$.
In other words, we have to either require the hitting set to lie on some coordinate subspace (a), or work with an affinely parameterized family of affine families of polyhedra (b). Each affine family of polyhedra in (b) corresponds to the slice of the affine family of polyhedra of (a) by a coordinate subspace of codimension the dimension of $x$.
2.3 A simpler correspondence via lifting
A reformulation of the $k$-adaptability problem (3) neater than those of Section 2.2 can be obtained by linearizing the problem via a "lifting" similar to the classical Veronese map. Specifically, writing $n_1$ for the dimension of $x$ and $q$ for that of $\xi$, let us define

$\varphi \colon (x, \xi) \mapsto (x, \xi, x \otimes \xi) \in \mathbb{R}^{n_1 + q + n_1 q},$

where $x \otimes \xi$ stands for the vector of all products $x_i \xi_j$. Observe that the image of $\varphi$ is the surface of dimension $n_1 + q$ defined by the quadratic equations $z_{ij} = x_i \xi_j$ for $1 \le i \le n_1$ and $1 \le j \le q$. Letting $\pi$ be the projection that forgets the last $n_1 q$ coordinates, we see that $\varphi$ is a bijection onto its image, with inverse $\pi$.
We use $\varphi$ to recast any set defined by conditions bilinear in $x$ and $\xi$ as the intersection of the image of $\varphi$ with a set defined by linear conditions in $\mathbb{R}^N$, where $N = n_1 + q + n_1 q$. For better readability, we use $z$ to denote a point in $\mathbb{R}^N$, and write $z = (z_x, z_\xi, z_{x\xi})$ with $z_x \in \mathbb{R}^{n_1}$, $z_\xi \in \mathbb{R}^q$ and $z_{x\xi} \in \mathbb{R}^{n_1 q}$. Now, for a fixed second-stage candidate $y$ and bound $v$, consider the system $T(\xi)\,x + W(\xi)\,y \le h(\xi)$, $c(\xi)^\top x + d^\top y \le v$ underlying characterization (b); its left-hand sides decompose into terms affine in $(x, \xi)$ and terms bilinear in $(x, \xi)$. We then define, for every $y$, the polyhedron $Q(y) \subseteq \mathbb{R}^N$ obtained from this system by substituting $(z_x, z_\xi)$ for $(x, \xi)$ and replacing every bilinear term $x_i \xi_j$ arising in it by its linear "lift" $z_{x\xi, ij}$. We also define, for every $x$, the polyhedron $U(x) = \{(x, \xi, x \otimes \xi) : \xi \in \Xi\} \subseteq \mathbb{R}^N$.

Now, for any fixed $v$, $(Q(y))_{y}$ and $(U(x))_{x}$ are affine families of polyhedra. For the former, this follows from the assumption that $T$, $W$, $c$ and $h$ depend affinely on $\xi$ while $d$ is constant. For the latter, this follows from the assumption that $\Xi$ is a polyhedron.
Lemma 6.
For any real $v$, the value of the $k$-adaptability problem (3) is at most $v$ if and only if there is a member of the lifted family $(U(x))_{x}$ that can be covered by $k$ members of the lifted family $(Q(y))_{y}$.
Proof.
Let us fix $v$ and start from the characterization (b) from Section 2.2, from which we recover the notation $P_x(\xi)$. The value of the $k$-adaptability problem (3) is at most $v$ if and only if there exist $x$ and $y^1, \ldots, y^k$ such that for every $\xi \in \Xi$, there exists $i \in [k]$ with $y^i \in P_x(\xi)$. Letting $\Lambda_{x,y} = \{\xi \in \Xi : y \in P_x(\xi)\}$, the characterization becomes: there exist $x$ and $y^1, \ldots, y^k$ such that $\Xi$ is covered by $\Lambda_{x,y^1} \cup \cdots \cup \Lambda_{x,y^k}$.

Let us now consider how the lift $\varphi$ acts on this characterization. First, $\varphi(x, \cdot)$ is a bijection from $\Xi$ to $U(x)$, so $\Xi$ is covered by $\bigcup_{i} \Lambda_{x,y^i}$ if and only if $U(x)$ is covered by $\bigcup_{i} \varphi(\{x\} \times \Lambda_{x,y^i})$. Now, $\varphi(\{x\} \times \Xi) = U(x)$ by definition of $U$. On the other hand, the definition of $Q(y)$ ensures that $\varphi(\{x\} \times \Lambda_{x,y}) = U(x) \cap Q(y)$. The characterization therefore reformulates as: there exist $x$ and $y^1, \ldots, y^k$ such that $U(x)$ is covered by $\bigcup_{i} \left(U(x) \cap Q(y^i)\right)$. Since for every $i$, the set $U(x) \cap Q(y^i)$ is contained in $U(x)$, we can drop the intersections with $U(x)$ in that characterization, and the statement follows.
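The lifting trick itself is easy to demonstrate. The following sketch (generic vectors and our own naming, not the paper's notation) implements a map of the form $\varphi(u, w) = (u, w, u \otimes w)$ together with the forgetful projection $\pi$, and checks that a bilinear expression in $(u, w)$ becomes a linear functional in the lifted coordinates:

```python
def lift(u, w):
    """phi(u, w) = (u, w, u tensor w): append every product u_i * w_j
    (row-major), so expressions bilinear in (u, w) become linear in z."""
    return tuple(u) + tuple(w) + tuple(ui * wj for ui in u for wj in w)

def project(z, n, m):
    """pi forgets the n*m product coordinates; pi(lift(u, w)) = (u, w)."""
    return z[:n], z[n:n + m]

def bilinear(u, Q, w):
    """u^T Q w, the kind of term that the lift linearizes."""
    return sum(Q[i][j] * u[i] * w[j]
               for i in range(len(u)) for j in range(len(w)))

u, w = (1.0, 2.0), (3.0, -1.0, 4.0)
z = lift(u, w)
# On the lifted point, u^T Q w becomes the linear functional <flatten(Q), z_uw>:
Q = [[1, 0, 2], [0, -1, 1]]
linear_value = sum(Q[i][j] * z[len(u) + len(w) + i * len(w) + j]
                   for i in range(2) for j in range(3))
print(project(z, 2, 3) == (u, w))         # True
print(linear_value == bilinear(u, Q, w))  # True
```

This is exactly why the covering reformulation above is "purely linear": every quadratic monomial gets its own coordinate, at the price of constraining the lifted point to the quadratic surface.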
3 Polynomial complexity bounds
We first show that algorithms for quantifier elimination yield, when $k$ and the dimensions are constant, strongly polynomial algorithms for the decision version of the $k$-adaptability Problem (3) as well as for our continuous hitting and covering problems.
3.1 Semi-algebraic sets and first-order formulas
Our proofs involve the analysis of families of polynomial inequalities in several real variables, that is, of semi-algebraic sets. It will be convenient to formulate these sets in the equivalent form of formulas from the first-order theory of the reals with addition and multiplication (see the book of Basu, Pollack and Roy [4]). We will use the following complexity bounds on quantifier elimination.
Theorem 7 ([4, Theorem 14.16]).
Let $\Phi$ be a first-order formula of the form $(Q_1 X^{[1]}) \cdots (Q_\omega X^{[\omega]})\, F(X^{[1]}, \ldots, X^{[\omega]}, Y)$, where $Q_1$, $Q_2$, …, $Q_\omega$ are quantifiers in $\{\forall, \exists\}$, $Y$ is a block of $\ell$ free variables, and $F$ is a quantifier free formula. Assume that each block $X^{[i]}$ has at most $k_i$ variables and that $F$ is built with at most $s$ polynomials of degree at most $d$ (in all variables).
-
(i)
There exists a quantifier free formula equivalent to $\Phi$ involving at most $s^{\kappa}\, d^{O(\kappa)}$ polynomials, each of degree at most $d^{O(\kappa')}$,
-
(ii)
this quantifier free formula can be computed in $s^{\kappa}\, d^{O(\kappa)}$ arithmetic operations in the ring generated by the coefficients of the polynomials, and
-
(iii)
if the coefficients of the input polynomials are integers of bitsize at most $\tau$, then the polynomials that arise in the algorithm have coefficients of bitsize bounded by $\tau\, d^{O(\kappa)}$,
where $\kappa = (\ell + 1)\prod_{i=1}^{\omega}(k_i + 1)$ and $\kappa' = \prod_{i=1}^{\omega}(k_i + 1)$.
We will also use the following complexity bound for the existential theory of the reals.
Theorem 8 ([4, Theorem 13.13]).
Let $F(X)$ be a quantifier free formula with $k$ free variables $X = (X_1, \ldots, X_k)$, built on $s$ polynomials of degree at most $d$.
-
(i)
The truth of the sentence $\exists X\, F(X)$ can be decided in $s^{k+1}\, d^{O(k)}$ arithmetic operations in the ring generated by the coefficients of the polynomials, and
-
(ii)
if the coefficients of the input polynomials are integers of bitsize bounded by $\tau$, the polynomials that arise in the algorithm have coefficients of bitsize bounded by $\tau\, d^{O(k)}$.
3.2 Application to k-adaptability
The decision version of Problem (3) asks, given some real number $v$, whether the optimal value of (3) is at most $v$.
Proof of Theorem 1.
The decision version of Problem (3) can be cast as deciding, given $v$, the validity of the formula

$\exists y^1 \cdots \exists y^k\; \exists x\; \forall \xi \quad \left( \xi \in \Xi \implies \bigvee_{i \in [k]} (x, y^i) \in P(\xi) \right),$

where $(P(\xi))_{\xi \in \Xi}$ is the affine family of polyhedra defined by

$P(\xi) = \{\, (x, y) : T(\xi)\,x + W(\xi)\,y \le h(\xi), \; c(\xi)^\top x + d^\top y \le v \,\}.$

Note that the polyhedron $P(\xi)$ encodes both the constraints enforced by $T$, $W$ and $h$ on $(x, y)$, and the bound $v$ on the optimal value (like in the proof of Lemma 4). The universal quantifier on $\xi$ takes care of the $\max$ in the formulation (3), while the disjunction on the terms of the form $(x, y^i) \in P(\xi)$ takes care of the $\min$ over $i$.
We can apply Theorem 7 to eliminate the universal quantifier on $\xi$. Referring to the parameters in Theorem 7, we have $\omega = 1$ and $k_1 = q$, and there are $k(m + 1) + s$ polynomials involved, defining the polyhedra $P(\xi)$ and $\Xi$. Each polynomial is of degree at most $2$, and the total number of free variables is $n_1 + k\,n_2$ (writing $n_1$, $n_2$ for the dimensions of $x$ and $y$). Hence, for fixed $k$ and fixed dimensions, the elimination of the quantifier takes time polynomial in $m$ and $s$.
We now have a formula with a single block of existential quantifiers, to which we can apply Theorem 8. Following Theorem 7, the degree of the polynomials involved remains bounded by a constant, and we can therefore decide the validity of the formula in time polynomial in $m$ and $s$, as claimed.
Similarly, we can express the hitting and covering problems for affine families of polyhedra as the decision of the validity of a formula, which can then be handled by Theorems 7 and 8. This gives the following results:
-
For every constant $k$, $n$ and $p$, there is an algorithm that decides in time strongly polynomial in the numbers of constraints, given a polyhedron $\Lambda$ in $\mathbb{R}^p$ defined by $s$ constraints and an affine family of polyhedra $(P(\lambda))_{\lambda \in \Lambda}$ in $\mathbb{R}^n$, each defined by at most $m$ constraints, whether the family admits a hitting set of size $k$. The complexity is polynomial in $m$ and $s$, where the exponent depends on $k$, $n$ and $p$.
-
For every constant $k$ and constant ambient and parameter dimensions, there is an algorithm that decides in time strongly polynomial in the numbers of constraints, given an affine family of polyhedra $(P(\lambda))_{\lambda \in \Lambda}$ and an affine family of polyhedra $(Q(\mu))_{\mu \in M}$ in the same ambient space, with $\Lambda$, $M$ and every polyhedron of both families defined by bounded numbers of constraints, whether there exist $\lambda \in \Lambda$ and $\mu^1, \ldots, \mu^k \in M$ such that $P(\lambda) \subseteq Q(\mu^1) \cup \cdots \cup Q(\mu^k)$. The complexity is polynomial in the numbers of constraints, where the exponent depends on the fixed parameters.
This approach can also be applied to the decision version of the two-stage robust optimization problem (2), yielding an algorithm with higher complexity than for the decision version of finite adaptability (3), see [12, Appendix C]. We also note that similar tools [4, Algorithm 14.9] can solve the optimization version of the $k$-adaptability Problem (3), with $\Xi$ a polyhedron given as an intersection of halfspaces, in polynomial time for every constant $k$ and constant dimensions.
4 Hitting a one-parameter family of polyhedra
We now prove Theorem 2: for every constant $d$, the size $k$ of the smallest hitting set of a one-parameter affine family of polytopes in $\mathbb{R}^d$ can be computed in time $O(k\,m^2)$, where $m$ denotes the number of constraints defining each polytope. The proof is essentially a greedy algorithm building on an application of parametric search [29] to a carefully chosen linear-time algorithm for linear programming in fixed dimension; here we use Megiddo's [30].
4.1 The intersection of an affine family of polyhedra
We define the common intersection region of an affine family of polyhedra $(P(\lambda))_{\lambda \in \Lambda}$ in $\mathbb{R}^d$ as the set $\bigcap_{\lambda \in \Lambda} P(\lambda)$. We use some properties of this region. Even though Section 4 focuses on the one-parameter case ($\Lambda \subseteq \mathbb{R}$), we find it convenient to establish these properties more generally here.
We let $\mathrm{ext}(C)$ denote the set of extreme points of a set $C$ (recall that a point $x$ of a convex set $C$ is called extreme if it cannot be expressed as the midpoint of two distinct points in $C$, or equivalently if $C \setminus \{x\}$ is convex [26, Chapter A, Definition 2.3.1]), and write $\overline{S}$ for the topological closure of a subset $S$.
Lemma 9.
For any affine family of polyhedra $(P(\lambda))_{\lambda \in \Lambda}$ and for any subset $S$ of its parameter set, we have (i) $\bigcap_{\lambda \in S} P(\lambda) = \bigcap_{\lambda \in \mathrm{conv}(S)} P(\lambda)$, (ii) $\bigcap_{\lambda \in S} P(\lambda) = \bigcap_{\lambda \in \overline{S}} P(\lambda)$ and, (iii) if $S$ is bounded, $\bigcap_{\lambda \in S} P(\lambda) = \bigcap_{\lambda \in \mathrm{ext}(\overline{\mathrm{conv}(S)})} P(\lambda)$.
Proof.
Observe first that for every subsets $S' \subseteq S$ of the parameter set, we have $\bigcap_{\lambda \in S} P(\lambda) \subseteq \bigcap_{\lambda \in S'} P(\lambda)$. It follows that $\bigcap_{\lambda \in \mathrm{conv}(S)} P(\lambda) \subseteq \bigcap_{\lambda \in S} P(\lambda)$ and $\bigcap_{\lambda \in \overline{S}} P(\lambda) \subseteq \bigcap_{\lambda \in S} P(\lambda)$.
Let us prove that $\bigcap_{\lambda \in S} P(\lambda) \subseteq \bigcap_{\lambda \in \mathrm{conv}(S)} P(\lambda)$. Let $x \in \bigcap_{\lambda \in S} P(\lambda)$. Consider a finite convex combination $\lambda = \sum_i \alpha_i \lambda_i$ where $\alpha_i \ge 0$, $\sum_i \alpha_i = 1$ and $\lambda_i \in S$. By assumption, for all $i$, $x$ hits the polyhedron $P(\lambda_i)$, that is $A(\lambda_i)\,x \le b(\lambda_i)$. Since $A$ and $b$ are affine, it follows that

$A(\lambda)\,x = \sum_i \alpha_i\, A(\lambda_i)\,x \le \sum_i \alpha_i\, b(\lambda_i) = b(\lambda),$

and $x \in P(\lambda)$. This completes the proof of (i).
Let us now prove that $\bigcap_{\lambda \in S} P(\lambda) \subseteq \bigcap_{\lambda \in \overline{S}} P(\lambda)$. Let $x \in \bigcap_{\lambda \in S} P(\lambda)$ and let $\lambda \in \overline{S}$. Then there exists a sequence $(\lambda_j)$ of elements of $S$ that converges to $\lambda$. By assumption, for all $j$, $x$ hits the polyhedron $P(\lambda_j)$, that is $A(\lambda_j)\,x \le b(\lambda_j)$. Since $A$ and $b$ are continuous, it follows that $A(\lambda)\,x \le b(\lambda)$, that is $x \in P(\lambda)$. This completes the proof of (ii).
Let us prove (iii). A theorem of Minkowski (see [26, Chapter A, Theorem 2.3.4]) asserts that a compact convex set in a finite dimensional Euclidean space is the convex hull of its extreme points. Hence, if $S$ is bounded, $\overline{\mathrm{conv}(S)}$ is a compact convex set, and is therefore the convex hull of its extreme points. Combining (i) and (ii), we thus have:

$\bigcap_{\lambda \in S} P(\lambda) = \bigcap_{\lambda \in \overline{\mathrm{conv}(S)}} P(\lambda) = \bigcap_{\lambda \in \mathrm{conv}(\mathrm{ext}(\overline{\mathrm{conv}(S)}))} P(\lambda) = \bigcap_{\lambda \in \mathrm{ext}(\overline{\mathrm{conv}(S)})} P(\lambda).$
This has an interesting computational consequence, which is already known as a tractable instance of static robust optimization [6, Chapter 1, Corollary 1.3.5 (i)], and which we include here for completeness.
Corollary 10.
Let $\Lambda$ be a polytope with $N$ vertices and let $(P(\lambda))_{\lambda \in \Lambda}$ be an affine family of polyhedra in $\mathbb{R}^d$, with every member of the family defined by $m$ constraints. Deciding the existence of a one-point hitting set for $(P(\lambda))_{\lambda \in \Lambda}$ reduces to solving a linear program with $d$ variables and $N m$ constraints.
Proof.
We apply Lemma 9 (i) to the set $V$ of vertices of $\Lambda$. Since $\mathrm{conv}(V) = \Lambda$, it follows that $\bigcap_{\lambda \in V} P(\lambda) = \bigcap_{\lambda \in \Lambda} P(\lambda)$. That is to say, $(P(\lambda))_{\lambda \in \Lambda}$ admits a one-point hitting set if and only if the finite family $(P(v))_{v \in V}$ admits a one-point hitting set. The latter condition is equivalent to the existence of $x \in \mathbb{R}^d$ such that for all $v \in V$, $A(v)\,x \le b(v)$. This is equivalent to finding a feasible solution of a linear program with $d$ variables and $N m$ constraints.
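The reduction is easy to exercise in ambient dimension 1, where the resulting linear program is just an interval intersection; in higher dimension one would hand the same system of constraints, one copy per vertex, to an LP solver. The sketch and its sliding-interval instance below are our own illustration:

```python
def one_point_hitting_set_1d(A_of, b_of, vertices, eps=1e-9):
    """Corollary-10-style check in ambient dimension 1: a single point hits
    every member of the family iff the system  a(v) x <= b(v)  over the
    vertices v of the parameter polytope is feasible.  Here A_of(v) is a
    list of scalar coefficients a and b_of(v) the matching right-hand
    sides, so feasibility reduces to intersecting half-lines.  Returns a
    hitting point, or None if there is none."""
    lo, hi = float("-inf"), float("inf")
    for v in vertices:
        for a, b in zip(A_of(v), b_of(v)):
            if abs(a) <= eps:
                if b < -eps:        # 0 * x <= b with b < 0: infeasible
                    return None
            elif a > 0:
                hi = min(hi, b / a)  # constraint reads x <= b / a
            else:
                lo = max(lo, b / a)  # constraint reads x >= b / a
    return lo if lo <= hi else None

# Family P(lam) = [lam, lam + 1]: constraints -x <= -lam and x <= lam + 1.
print(one_point_hitting_set_1d(lambda v: [-1, 1],
                               lambda v: [-v, v + 1], [0, 1]))  # 1.0
print(one_point_hitting_set_1d(lambda v: [-1, 1],
                               lambda v: [-v, v + 1], [0, 2]))  # None
```

Over the parameter interval $[0, 1]$ the two extreme intervals still share the point $1$; over $[0, 2]$ they are disjoint, so no single point works.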
4.2 Structure of a minimum-size hitting set
Let $\Lambda \subseteq \mathbb{R}$ and let $(P(\lambda))_{\lambda \in \Lambda}$ be an affine family of polyhedra. By Lemma 9, for any point $x$, the set $\{\lambda \in \Lambda : x \in P(\lambda)\}$ is a closed interval of $\Lambda$. For $t \in \Lambda$ let us define

$f(t) = \sup\,\{\, t' \in \Lambda : (P(\lambda))_{\lambda \in \Lambda \cap [t, t']} \text{ admits a one-point hitting set} \,\}.$

Note that if $t \le t'$, then $f(t) \le f(t')$. Also note that the supremum may be finite but not a maximum, in the sense that $(P(\lambda))_{\lambda \in \Lambda \cap [t, f(t)]}$ may not have a one-point hitting set. Indeed, consider for example $\Lambda = [0, 1]$ and $P(\lambda) = \{y \in \mathbb{R} : (1 - \lambda)\,y \ge 1\}$; for $t' < 1$, we have $\bigcap_{\lambda \in [0, t']} P(\lambda) = [1/(1 - t'), +\infty) \ne \emptyset$, so that $f(0) = 1$; note, however, that $P(1) = \emptyset$, which prevents $(P(\lambda))_{\lambda \in [0, 1]}$ from having a (one-point) hitting set. When $(P(\lambda))_{\lambda \in \Lambda}$ is an affine family of polytopes, we argue that the supremum is always attained.
Lemma 11.
Let $\Lambda \subseteq \mathbb{R}$ and let $(P(\lambda))_{\lambda \in \Lambda}$ be an affine family of polytopes. For every $t \in \Lambda$, either $\Lambda \cap [t, f(t)]$ is empty or there exists $x$ such that $\{\lambda \in \Lambda : x \in P(\lambda)\}$ contains $\Lambda \cap [t, f(t)]$.
Proof.
Let $t \in \Lambda$. We can assume that $\Lambda \cap [t, f(t)]$ is nonempty, as otherwise the statement is trivial. We can also assume that $f(t) > t$, as otherwise any point of $P(t)$ satisfies the condition. Now, suppose, by contradiction, that $\bigcap_{\lambda \in \Lambda \cap [t, f(t)]} P(\lambda)$ is empty. Since $\Lambda \cap [t, f(t)] \subseteq \overline{\Lambda \cap [t, f(t))}$, we have $\bigcap_{\lambda \in \Lambda \cap [t, f(t))} P(\lambda) \subseteq \bigcap_{\lambda \in \Lambda \cap [t, f(t)]} P(\lambda)$ by Lemma 9 (ii), so $\bigcap_{\lambda \in \Lambda \cap [t, f(t))} P(\lambda)$ is already empty. Helly's theorem asserts that if a (possibly infinite) family of compact convex sets in $\mathbb{R}^d$ has empty intersection, then some $d + 1$ of them already have empty intersection. Therefore, there exist $\lambda_1 \le \cdots \le \lambda_{d+1}$ in $\Lambda \cap [t, f(t))$ such that $P(\lambda_1) \cap \cdots \cap P(\lambda_{d+1})$ is empty. Yet, $\lambda_{d+1} < f(t)$ ensures that there exists $x$ such that $x \in P(\lambda)$ for every $\lambda \in \Lambda \cap [t, \lambda_{d+1}]$. In particular, $x \in P(\lambda_1) \cap \cdots \cap P(\lambda_{d+1})$, a contradiction. Let us write $f^i$ for the $i$-fold composition $f \circ \cdots \circ f$. Lemma 11 yields a simple characterization of the size of hitting sets of one-parameter families of polytopes.
Lemma 12.
An affine family of polytopes $(P(\lambda))_{\lambda \in \Lambda}$ with $\Lambda \subseteq \mathbb{R}$ bounded has a hitting set of size $k$ if and only if $f^k(\inf \Lambda) \ge \sup \Lambda$.
Proof.
Suppose that $f^k(\inf \Lambda) \ge \sup \Lambda$. Let $t_0 = \inf \Lambda$ and for $i \in [k]$ let $t_i = f(t_{i-1})$. By Lemma 11, for every $i \in [k]$ there exists $x_i$ such that $\{\lambda \in \Lambda : x_i \in P(\lambda)\}$ contains $\Lambda \cap [t_{i-1}, t_i]$. Since $t_k = f^k(t_0) \ge \sup \Lambda$, we have $\Lambda \subseteq [t_0, t_1] \cup \cdots \cup [t_{k-1}, t_k]$, and $\{x_1, \ldots, x_k\}$ is a hitting set for $(P(\lambda))_{\lambda \in \Lambda}$.
Conversely, suppose that $(P(\lambda))_{\lambda \in \Lambda}$ admits a hitting set $\{x_1, \ldots, x_k\}$. Let us write $[a_j, b_j]$ for the closed interval $\{\lambda \in \Lambda : x_j \in P(\lambda)\}$ and assume, without loss of generality, that $b_1 \le b_2 \le \cdots \le b_k$. Observe that, by definition of $f$, $f(a_j) \ge b_j$, hence $f(t) \ge b_j$ for all $t \ge a_j$. Also observe that $a_1 \le \inf \Lambda$, since otherwise $\{x_1, \ldots, x_k\}$ misses $P(\lambda)$ for some $\lambda$ near $\inf \Lambda$. We must also have $a_{j+1} \le b_j$ for $1 \le j < k$, since otherwise $\{x_1, \ldots, x_k\}$ misses $P(\lambda)$ for $\lambda \in \Lambda \cap (b_j, a_{j+1})$, and $b_k \ge \sup \Lambda$, otherwise $\{x_1, \ldots, x_k\}$ misses $P(\lambda)$ for some $\lambda$ near $\sup \Lambda$. Combining these observations, and using the fact that the function $f$ is nondecreasing, we obtain:

$f(\inf \Lambda) \ge f(a_1) \ge b_1, \quad f^2(\inf \Lambda) \ge f(b_1) \ge f(a_2) \ge b_2, \quad \ldots, \quad f^k(\inf \Lambda) \ge b_k.$

Since $b_k \ge \sup \Lambda$, we must have $f^k(\inf \Lambda) \ge \sup \Lambda$, as claimed.
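The greedy strategy suggested by Lemmas 11 and 12 can be sketched as follows for a toy family of intervals with affine endpoint functions; here $f$ is computed by plain bisection rather than the parametric search of Section 4.3, and the instance and names are ours:

```python
def min_hitting_set_size(a, b, lo, hi, tol=1e-9):
    """Greedy sketch of Lemma 12 for a one-parameter family of nonempty
    intervals P(lam) = [a(lam), b(lam)], lam in [lo, hi], with a and b
    affine.  one_point(t, tp) tests whether a single point hits all P(lam)
    for lam in [t, tp]: since a and b are affine, the common intersection
    is [max(a(t), a(tp)), min(b(t), b(tp))] (endpoints suffice, cf. the
    Lemma 9 / Corollary 10 reduction).  f(t) is located by bisection.
    Assumes f(t) > t for all t, so the greedy loop makes progress."""
    def one_point(t, tp):
        return max(a(t), a(tp)) <= min(b(t), b(tp)) + tol
    count, t = 0, lo
    while True:
        count += 1
        if one_point(t, hi):   # one more point covers the rest of the domain
            return count
        x, y = t, hi           # bisect for f(t) = sup{tp : one_point(t, tp)}
        for _ in range(60):
            m = 0.5 * (x + y)
            x, y = (m, y) if one_point(t, m) else (x, m)
        t = x

# Unit-speed sliding intervals: P(lam) = [lam, lam + 0.6] over [0, 1].
print(min_hitting_set_size(lambda l: l, lambda l: l + 0.6, 0.0, 1.0))  # 2
print(min_hitting_set_size(lambda l: l, lambda l: l + 2.0, 0.0, 1.0))  # 1
```

The first family needs two points because no single value lies in both $[0, 0.6]$ and $[1, 1.6]$; widening the intervals makes one point enough.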
4.3 Parametric search on a linear programming algorithm
Lemma 12 reduces the computation of a minimum-size hitting set for a one-parameter affine family of polytopes to the computation of the function $f$. We now focus on the latter problem.
4.3.1 Comparing is easy
Let us fix some real $t \in \Lambda$; we assume that $t$ is known, and want to determine $f(t)$. We first notice that we already have efficient algorithms for comparing any input real $t'$ to $f(t)$: indeed, $t' \le f(t)$ if and only if $(P(\lambda))_{\lambda \in \Lambda \cap [t, t']}$ can be hit by a single point, which reduces, by Corollary 10, to testing whether $\bigcap_{\lambda \in \Lambda \cap [t, t']} P(\lambda)$ is empty, and can therefore be done by solving a linear program with $d$ variables and $2m$ constraints. Let us note that several deterministic algorithms have been proposed to solve such a linear program in $O(m)$ time when the dimension $d$ is fixed [30, 17, 18, 3, 16, 9, 14].
4.3.2 From comparing to computing
It turns out that a general technique, called parametric search [29], can turn an algorithm for comparing a given real to $f(t)$ into an algorithm for computing $f(t)$, the latter having complexity $O(T^2)$ where $T$ is the complexity of the former. (A more effective technique for this purpose was proposed by Chan [13], but unfortunately we could not apply it here.) This suggests that all we need is to apply parametric search to one of these linear-time algorithms for linear programming to get a quadratic-time algorithm to compute $f(t)$. There is, however, a catch: parametric search can only be applied to a comparison algorithm that can be modeled by an algebraic decision tree where the decisions have bounded degree. This is, fortunately, the case of Megiddo's algorithm [30] (a fact more easily checked from Clarkson's summary of that algorithm [17]). For completeness, we summarize in the next subsections the model of algebraic decision trees and the parametric search technique.
4.3.3 Background: algebraic decision trees
The algebraic decision tree model of computation [35] represents an algorithm with input $(x_1, \ldots, x_n)$ as a rooted tree $T$ where (i) each internal node is decorated with a polynomial with constant coefficients whose variables are among the input variables $x_1, \ldots, x_n$, (ii) for each internal node, the edges towards its descendants are labeled by sign conditions ($<$, $\le$, $=$, $\ge$, or $>$) that are mutually exclusive and cover all cases, and (iii) each leaf has a label that serves as answer to the algorithm. To execute an algebraic decision tree $T$ on an input, we traverse $T$ starting from the root, determining at each node the sign of the polynomial on the input and taking the corresponding edge towards a descendant, and return the label of the leaf we eventually reach. Let us stress that this model of computation is nonuniform, in the sense that the tree is actually a family of trees, one per input size $n$. Let us also emphasize that if a polynomial-time Real-RAM algorithm can be modeled by algebraic decision trees in which every node polynomial has constant degree, then that algorithm is strongly polynomial.
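For concreteness, here is a tiny evaluator for this model (our own encoding of trees as nested tuples, collapsing the sign conditions to $<$, $=$, $>$):

```python
def evaluate(tree, x):
    """Run an algebraic decision tree: ('leaf', label) answers; ('node', p,
    children) evaluates the node polynomial p on the input tuple x and
    follows the child labeled by its sign ('<', '=', or '>')."""
    while tree[0] == "node":
        _, p, children = tree
        v = p(x)
        tree = children["<" if v < 0 else ">" if v > 0 else "="]
    return tree[1]

# Height-1 tree deciding membership in the unit disk via the sign of
# x1^2 + x2^2 - 1, a degree-2 node polynomial as allowed by the model:
disk = ("node", lambda x: x[0] ** 2 + x[1] ** 2 - 1,
        {"<": ("leaf", True), "=": ("leaf", True), ">": ("leaf", False)})

print(evaluate(disk, (0.5, 0.5)))  # True
print(evaluate(disk, (2.0, 0.0)))  # False
```

An execution is exactly one root-to-leaf path, so the height of the tree is the worst-case number of sign tests.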
4.3.4 Background: Parametric search
Let $P(t)$ be a decision problem depending on a real parameter $t$. Suppose that $P$ is monotone in the sense that if $P(t)$ is true, then $P(t')$ is also true for all $t' \le t$. Then, there exists some threshold $t^* = \sup\{t : P(t) \text{ is true}\}$, possibly $\pm\infty$. Parametric search [29] (see also [2]) is a technique to turn certain algorithms for evaluating $P(t)$ into algorithms for computing $t^*$. This technique applies to algorithms presented as algebraic decision trees. Intuitively, it consists in identifying the path in the tree that would be followed if that tree was executed on the value $t^*$.
Let us sketch a simplification of that technique sufficient for our purpose. We traverse the tree while maintaining an interval $I$ known to contain $t^*$, initially set to $(-\infty, +\infty)$. When reaching a node $v$, we check whether the sign of the polynomial $p_v$ is constant over $I$. If it is, we follow the suitable branch and keep $I$ unchanged. Otherwise, we compute the set $R$ of roots of $p_v$ and use $A$ to determine $A(r)$ for each $r \in R$; the monotony ensures that the largest root $r \in R$ such that $A(r)$ is true and the smallest root $r \in R$ such that $A(r)$ is false are consecutive in $R$; they determine our new interval $I$, and we follow the branch indicated by the sign of $p_v$ on $I$. We continue this procedure until we reach a leaf, and return the upper endpoint of $I$. Altogether, if $h$ is the height of the tree, the parametric search takes $O(h)$ calls to $A$ on specific values as well as additional computation at each level. In other words, if $A$ has complexity $T$, then the parametric search determines the exact value of $t^*$ in time $O(T^2)$.
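A toy rendition of this simplified technique (our own code, not the paper's) makes the mechanics concrete. Take the monotone predicate $A(t)$ = "$t < \min(\text{data})$", decided by a linear scan over the data; viewed as an algebraic decision tree, the $i$th node branches on the sign of the degree-1 polynomial $p_i(t) = \text{data}[i] - t$, whose unique root is $\text{data}[i]$:

```python
# Simplified parametric search run on a chain-shaped decision tree whose
# i-th node branches on p_i(t) = data[i] - t. We walk the path that the
# tree would follow on input t = t*, shrinking an interval known to
# contain t* each time a branching polynomial changes sign inside it.

def parametric_search_min(data, decide):
    lo, hi = float('-inf'), float('inf')    # interval known to contain t*
    for d in data:                          # the decision-tree path
        root = d                            # unique root of p(t) = d - t
        if lo < root < hi:                  # sign of p not constant on (lo, hi)
            if decide(root):                # A(root) true: t* lies above root
                lo = root
            else:                           # A(root) false: t* is at most root
                hi = root
    return hi                               # upper endpoint of the final interval

data = [3, 1, 2]
# A(t) = "t < min(data)" is monotone: true below t* = min(data) = 1.
print(parametric_search_min(data, lambda t: all(d > t for d in data)))  # -> 1
```

The final interval's upper endpoint is the smallest root at which $A$ is false, which is exactly $t^* = \min(\text{data})$ here: the optimum is recovered purely from calls to the decision algorithm at root values.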
4.4 Proof of Theorem 2
Let us first summarize how our algorithm works in the real RAM model.
Proposition 13.
For every constant $d$, there is a Real-RAM algorithm that computes, in time $O(Nn^2)$, given an affine family of polytopes $(P_t)_{t \in [0,1]}$ in $\mathbb{R}^d$, each defined by at most $n$ constraints, the size $N$ of the smallest hitting set of the family.
Proof.
We are given as input the affine family $(P_t)_{t \in [0,1]}$. We put $t_0 = 0$ and compute $t_{i+1}$ from $t_i$, for $i = 0, 1, 2, \dots$, until we reach some value $t_N \ge 1$. At this point, we return $N$ as the size of the smallest hitting set of the family. We compute $t_{i+1}$ from $t_i$ by performing parametric search on the algorithm that, given some real $t$, uses Megiddo's algorithm to solve the linear program that determines whether $(P_s)_{s \in [t_i, t]}$ can be hit by a single point. Since the dimension is fixed, Megiddo's algorithm takes $O(n)$ time to solve one comparison to $t_{i+1}$, and the computation of one $t_{i+1}$ takes $O(n^2)$ time. Altogether, the algorithm takes $O(Nn^2)$ time, where $N$ is the size of the smallest hitting set of the family.
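The sweep in this proof can be illustrated in one dimension, for an affine family of intervals $I_t = [a(t), b(t)]$ with $a, b$ affine: a single point hits every $I_t$ with $t$ in a range exactly when the intervals have a common point, and by affinity only the two endpoint intervals of the range matter. The sketch below is our own; where the paper computes each breakpoint exactly by parametric search on Megiddo's algorithm, we substitute a plain numeric binary search on the same decision oracle, which loses exactness but shows the structure:

```python
# 1-D toy of the greedy sweep: repeatedly find the largest hi such that
# one point hits all intervals I_t = [a(t), b(t)] for t in [lo, hi],
# place a point there, and continue from the breakpoint.

def greedy_hitting_set_size(a, b, eps=1e-9):
    """Points needed to hit every interval [a(t), b(t)], t in [0, 1]."""
    def hittable(lo, hi):
        # One point hits all I_t, t in [lo, hi], iff their intersection is
        # nonempty; since a and b are affine, only the endpoints matter.
        return max(a(lo), a(hi)) <= min(b(lo), b(hi))

    lo, count = 0.0, 0
    while True:
        count += 1
        if hittable(lo, 1.0):           # one more point finishes the sweep
            return count
        low, high = lo, 1.0             # binary search for the largest
        while high - low > eps:         # hittable hi (the paper uses
            mid = (low + high) / 2      # parametric search and stays exact)
            if hittable(lo, mid):
                low = mid
            else:
                high = mid
        lo = low                        # next point starts at the breakpoint

# I_t = [t, t + 0.3]: each point hits a t-window of length 0.3, so 4 points.
print(greedy_hitting_set_size(lambda t: t, lambda t: t + 0.3))  # -> 4
```

The endpoint observation in `hittable` is the 1-D shadow of the fixed-dimension linear program solved by Megiddo's algorithm in the proof.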
To prove Theorem 2, it remains to analyze the bit complexity of the numbers manipulated by the algorithm of Proposition 13 when the size $k$ of the hitting set is fixed in addition to the ambient dimension $d$ of the polytopes. This proof relies on the observation that if $T$ is an algebraic decision tree of height $h$ that decides a monotone decision problem $A(t)$, then the algorithm obtained by performing parametric search on $T$ can also be modeled by a tree of height $O(h)$, which is almost an algebraic decision tree (except for the leaves, which do not output only true or false). The key ingredient for this is the following lemma.
Lemma 14.
For every constant $d$ and $d'$, there exist algebraic decision trees of constant complexity that, given an integer $j$ and a vector describing the coefficients of two univariate polynomials $p$ and $q$ of degree at most $d$ and $d'$, respectively, solve the following problems:
- (i) does $p$ have at least $j$ real roots?
- (ii) is the value of $q$ in the $j$-th real root of $p$ strictly positive (resp. strictly negative, zero)?
Proof.
We first reformulate questions (i) and (ii) as formulas of the first-order theory of the reals in a constant number of variables. Write $p(x) = a_0 + a_1 x + \dots + a_d x^d$ and $q(x) = b_0 + b_1 x + \dots + b_{d'} x^{d'}$. For (i), the polynomial $p$ has at least $j$ distinct real roots if and only if the following formula is true:
$$\exists x_1 \, \exists x_2 \cdots \exists x_j \quad \big( x_1 < x_2 < \cdots < x_j \ \wedge\ p(x_1) = 0 \wedge p(x_2) = 0 \wedge \cdots \wedge p(x_j) = 0 \big).$$
For (ii), we treat the case of strictly positive sign; the two other cases follow the same path. Observe that the sign of $q$ in the $j$th real root of $p$ is positive if and only if
- the polynomial $p$ has at least $j$ distinct real roots $x_1 < x_2 < \cdots < x_j$,
- any other root of $p$ is larger than $x_j$, and
- $q(x_j) > 0$.
This holds if and only if a formula of the first-order theory of the reals expressing the conjunction of these three conditions is true (we have $j$ appear in the formula only as a value, not as an index, to ensure that eventually the tree depends only on $d$ and $d'$).
By Theorem 7, such a formula is equivalent to a quantifier-free formula in the input vector of coefficients and in $j$, involving a constant number of polynomials of constant degree. That quantifier-free formula can in turn be modeled by an algebraic decision tree of constant complexity.
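The proof above obtains these trees abstractly from quantifier elimination. As a concrete, independent way to answer question (i), one can count the distinct real roots of $p$ via Sturm's theorem; the sketch below (our own illustration, not the construction used in the proof) does so in exact rational arithmetic:

```python
# Count the distinct real roots of a univariate polynomial via Sturm's
# theorem: build the Sturm chain p, p', -rem(p, p'), ... and subtract the
# number of sign variations at +infinity from the number at -infinity.
# Polynomials are coefficient lists, low degree first, over the rationals.

from fractions import Fraction

def polyrem(a, b):
    """Remainder of the polynomial division of a by b (coeffs low-to-high)."""
    a = a[:]
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        coef = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= coef * c
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def sign_at_inf(p, positive):
    """Sign of p near +infinity (or -infinity if positive is False)."""
    s = 1 if p[-1] > 0 else -1
    if not positive and (len(p) - 1) % 2 == 1:   # odd degree flips at -inf
        s = -s
    return s

def count_real_roots(p):
    """Number of distinct real roots of p, by Sturm's theorem."""
    p = [Fraction(c) for c in p]
    chain = [p, [i * c for i, c in enumerate(p)][1:]]   # p and p'
    while chain[-1]:
        r = polyrem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    chain = [q for q in chain if q]
    def variations(positive):
        signs = [sign_at_inf(q, positive) for q in chain]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return variations(False) - variations(True)

# x^3 - x = x(x - 1)(x + 1) has three real roots; x^2 + 1 has none.
print(count_real_roots([0, -1, 0, 1]))  # -> 3
print(count_real_roots([1, 0, 1]))      # -> 0
```

Unlike the constant-size trees of Lemma 14, this routine's cost grows with the degree; it only illustrates the semantics of question (i).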
We can now complete the proof of Theorem 2.
Proof of Theorem 2.
Let $w$ be a vector of real numbers consisting of the parameters that represent the affine family $(P_t)_{t \in [0,1]}$. Let $T_1$ denote an algebraic decision tree that models the comparison, using Megiddo's linear-time algorithm, of a given $t$ to $t_1$. The input of $T_1$ is $w$, $t_0$ and $t$. The height of $T_1$ is $O(n)$.
Now, consider the real RAM algorithm that computes $t_1$ by performing parametric search on $T_1$. This algorithm has complexity $O(n^2)$ and takes $w$ and $t_0$ as input. Observe that the value $t_1$ that it outputs is not an arbitrary real: the principles of parametric search ensure that the output is either an endpoint of the initial interval or a real root of a polynomial determining the branching in one of the nodes of $T_1$ (see Section 4.3.4). We can therefore represent the output of the parametric search by an integer $j$ (to mean we are interested in the $j$th real root) and a polynomial $P_1$ of constant degree and whose coefficients are constant-degree polynomials of some input coordinates. We can model this parametric search by a tree $A_1$ that is almost an algebraic decision tree: $A_1$ takes $w$ and $t_0$ as input, and each leaf contains the description of $t_1$ in the form of the integer $j$ and the polynomial $P_1$. The only aspect in which $A_1$ is not an algebraic decision tree is that it does not solve a decision problem.
We can now append at each leaf of $A_1$ a second copy of $A_1$ where the input parameter $t_0$ has been substituted by the $j$th root of $P_1$. This only requires changing, in the second copy of $A_1$, for every branching polynomial $p$, the one-node evaluation of the sign of $p$ by the constant-size subtree evaluating the sign of $p$ in the $j$th root of $P_1$ given by Lemma 14. This results in a tree $A_2$ that takes $w$ and $t_0$ as input, and where each leaf contains the description of $t_2$. Again, the principles of parametric search ensure that the output can be presented in the form of an integer $j$ and a polynomial $P_2$ of constant degree, the coefficients of which are constant-degree polynomials of some input coordinates. Again, the only aspect in which $A_2$ is not an algebraic decision tree is that it does not solve a decision problem. Again, $A_2$ has height $O(n)$.
And so on: for every constant $i$ we can construct a tree $A_i$ of height $O(n)$, that takes $w$ and $t_0$ as input, and where each leaf contains the description of $t_i$. Finally, in $A_k$, we substitute every leaf by a constant-size algebraic decision tree that compares the number stored in that leaf to the input number $s$. The resulting tree is an algebraic decision tree of height $O(n)$ that takes $w$, $t_0$ and $s$ as input, and that compares $s$ to $t_k$. Executing this tree for $s = 1$ gives a strongly quadratic algorithm for deciding whether $(P_t)_{t \in [0,1]}$ has a $k$-point hitting set.
Proving Corollary 3 now amounts to going from a decision problem to the associated optimization problem.
Proof of Corollary 3.
With Lemma 5 and Theorem 2, we already have that for every constant $k$ and $d$, there is an algorithm that solves the decision version of the $k$-adaptability Problem (3) in the special case where the uncertainty set is an interval, in time strongly quadratic in the input size. That algorithm can be modeled by an algebraic decision tree (this is the final tree in the proof of Theorem 2). We can therefore perform parametric search on that algorithm to solve the optimization version of these problems in time strongly quartic in the input size.
5 Concluding remarks
1. The quantifier elimination algorithms do not need the maps defining the objective and the constraints (for the finite adaptability) or the maps defining the polyhedra (for families of polyhedra) to be affine, but merely to be polynomial of constant degree. Theorem 1 thus generalizes, mutatis mutandis, to this setting. Note that the assumption that these maps are affine is used in the correspondence between the finite adaptability and the hitting/covering of affine families of polyhedra, in the hitting/covering duality, and in the proofs of Theorem 2 and Corollary 3.
2. The assumption in Theorem 2 that $(P_t)_{t \in [0,1]}$ be an affine family of polytopes, rather than polyhedra, is used to ensure that each $t_{i+1}$ is not only a supremum, but actually a maximum (this underlies the proof of Lemma 11). We conjecture that if $(P_t)_{t \in [0,1]}$ is a one-parameter family of polyhedra and a subfamily $(P_t)_{t \in [a,b]}$ cannot be hit by a single point although every subfamily $(P_t)_{t \in [a,b')}$ with $b' < b$ can, then the polyhedron $P_b$ must be empty. If that is true, then Theorem 2 readily generalizes to affine families of polyhedra.
3. A natural question is whether the hitting set problem for affine families of polyhedra is fixed-parameter tractable with respect to the dimensions (of the ambient space of the polyhedra as well as the parameter space).
References
- [1] Pankaj K. Agarwal. Range searching. In Handbook of discrete and computational geometry, pages 1057–1092. Chapman and Hall/CRC, 2017.
- [2] Pankaj K. Agarwal and Micha Sharir. Efficient algorithms for geometric optimization. ACM Comput. Surv., 30(4):412–458, 1998. doi:10.1145/299917.299918.
- [3] Pankaj K. Agarwal, Micha Sharir, and Sivan Toledo. An efficient multi-dimensional searching technique and its applications. Technical report, Duke University, USA, 1993.
- [4] Saugata Basu, Richard Pollack, and Marie-Françoise Roy. Algorithms in real algebraic geometry, volume 10 of Algorithms and Computation in Mathematics. Springer, 2006. 2nd ed. URL: https://hal.science/hal-01083587.
- [5] Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust Optimization. Princeton University Press, 2009.
- [6] Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust Optimization, volume 28 of Princeton Series in Applied Mathematics. Princeton University Press, 2009. doi:10.1515/9781400831050.
- [7] Aharon Ben-Tal and Arkadi Nemirovski. Robust optimization–methodology and applications. Mathematical programming, 92:453–480, 2002. doi:10.1007/S101070100286.
- [8] Dimitris Bertsimas and Constantine Caramanis. Finite adaptability in multistage linear optimization. IEEE Trans. Autom. Control., 55(12):2751–2766, 2010. doi:10.1109/TAC.2010.2049764.
- [9] Hervé Brönnimann, Bernard Chazelle, and Jirí Matoušek. Product range spaces, sensitive sampling, and derandomization. SIAM J. Comput., 28(5):1552–1575, 1999. doi:10.1137/S0097539796260321.
- [10] Hervé Brönnimann and Michael T. Goodrich. Almost optimal set covers in finite VC-dimension. Discret. Comput. Geom., 14(4):463–479, 1995. doi:10.1007/BF02570718.
- [11] Christoph Buchheim and Jannis Kurtz. Min-max-min robust combinatorial optimization. Math. Program., 163(1-2):1–23, 2017. doi:10.1007/S10107-016-1053-Z.
- [12] Jean Cardinal, Xavier Goaoc, and Sarah Wajsbrot. Hitting and covering affine families of convex polyhedra, with applications to robust optimization, 2025. doi:10.48550/arXiv.2504.16642.
- [13] Timothy M. Chan. Geometric applications of a randomized optimization technique. Discret. Comput. Geom., 22(4):547–567, 1999. doi:10.1007/PL00009478.
- [14] Timothy M. Chan. Improved deterministic algorithms for linear programming in low dimensions. ACM Trans. Algorithms, 14(3):30:1–30:10, 2018. doi:10.1145/3155312.
- [15] Bernard Chazelle, Herbert Edelsbrunner, Michelangelo Grigni, Leonidas J. Guibas, Micha Sharir, and Emo Welzl. Improved bounds on weak epsilon-nets for convex sets. Discret. Comput. Geom., 13:1–15, 1995. doi:10.1007/BF02574025.
- [16] Bernard Chazelle and Jirí Matoušek. On linear-time deterministic algorithms for optimization problems in fixed dimension. J. Algorithms, 21(3):579–597, 1996. doi:10.1006/JAGM.1996.0060.
- [17] Kenneth L. Clarkson. Linear programming in $O(n 3^{d^2})$ time. Inf. Process. Lett., 22(1):21–24, 1986. doi:10.1016/0020-0190(86)90037-2.
- [18] Martin E. Dyer. On a multidimensional search technique and its application to the euclidean one-centre problem. SIAM J. Comput., 15(3):725–738, 1986. doi:10.1137/0215052.
- [19] Khaled Elbassioni. A bicriteria approximation algorithm for the minimum hitting set problem in measurable range spaces. Oper. Res. Lett., 51(5):507–514, 2023. doi:10.1016/J.ORL.2023.07.005.
- [20] Jeff Erickson. Concrete models of computation, 2003. URL: https://jeffe.cs.illinois.edu/teaching/497/06-algebraic-tree.pdf.
- [21] Jeff Erickson, Ivor van der Hoog, and Tillmann Miltzow. Smoothing the gap between NP and ER. SIAM J. Comput., 53(6):S20–102, 2024. doi:10.1137/20M1385287.
- [22] Guy Even, Dror Rawitz, and Shimon Shahar. Hitting sets when the VC-dimension is small. Inf. Process. Lett., 95(2):358–362, 2005. doi:10.1016/J.IPL.2005.03.010.
- [23] Grani A Hanasusanto, Daniel Kuhn, and Wolfram Wiesemann. K-adaptability in two-stage robust binary programming. Operations Research, 63(4):877–891, 2015. doi:10.1287/OPRE.2015.1392.
- [24] Grani Adiwena Hanasusanto, Daniel Kuhn, and Wolfram Wiesemann. K-adaptability in two-stage robust binary programming. Oper. Res., 63(4):877–891, 2015. doi:10.1287/OPRE.2015.1392.
- [25] David Haussler and Emo Welzl. epsilon-nets and simplex range queries. Discret. Comput. Geom., 2:127–151, 1987. doi:10.1007/BF02187876.
- [26] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Fundamentals of Convex Analysis. Springer Verlag, Heidelberg, 2001.
- [27] Safia Kedad-Sidhoum, Anton Medvedev, and Frédéric Meunier. Finite adaptability in two-stage robust optimization: asymptotic optimality and tractability. arXiv 2305.05399, 2023. arXiv:2305.05399.
- [28] Maarten Löffler. Data Imprecision in Computational Geometry. PhD thesis, Utrecht University, Netherlands, 2009. URL: http://dspace.library.uu.nl/handle/1874/36022.
- [29] Nimrod Megiddo. Applying parallel computation algorithms in the design of serial algorithms. J. ACM, 30(4):852–865, 1983. doi:10.1145/2157.322410.
- [30] Nimrod Megiddo. Linear programming in linear time when the dimension is fixed. J. ACM, 31(1):114–127, 1984. doi:10.1145/2422.322418.
- [31] Nimrod Megiddo. On the complexity of some geometric problems in unbounded dimension. J. Symb. Comput., 10(3/4):327–334, 1990. doi:10.1016/S0747-7171(08)80067-3.
- [32] Nimrod Megiddo and Kenneth J. Supowit. On the complexity of some common geometric location problems. SIAM J. Comput., 13(1):182–196, 1984. doi:10.1137/0213014.
- [33] Krzysztof Postek and Dick den Hertog. Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS J. Comput., 28(3):553–574, 2016. doi:10.1287/IJOC.2016.0696.
- [34] Steve Smale. Mathematical problems for the next century. The Mathematical Intelligencer, 20(2):7–15, 1998. doi:10.1007/BF03025291.
- [35] J. Michael Steele and Andrew Chi-Chih Yao. Lower bounds for algebraic decision trees. J. Algorithms, 3(1):1–8, 1982. doi:10.1016/0196-6774(82)90002-5.
- [36] Anirudh Subramanyam, Chrysanthos E. Gounaris, and Wolfram Wiesemann. K-adaptability in two-stage mixed-integer robust optimization. Math. Program. Comput., 12(2):193–224, 2020. Extended version: https://arxiv.org/abs/1706.07097. doi:10.1007/S12532-019-00174-2.
- [37] Ivor van der Hoog, Irina Kostitsyna, Maarten Löffler, and Bettina Speckmann. Preprocessing imprecise points for the pareto front. In Joseph (Seffi) Naor and Niv Buchbinder, editors, Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms, SODA 2022, Virtual Conference / Alexandria, VA, USA, January 9 - 12, 2022, pages 3144–3167. SIAM, 2022. doi:10.1137/1.9781611977073.122.
- [38] Ivor van der Hoog, Irina Kostitsyna, Maarten Löffler, and Bettina Speckmann. Sorting under partial (interval order) information. J. Comput. Geom., 15(1):143–171, 2024. doi:10.20382/JOCG.V15I1A6.
- [39] Ihsan Yanikoglu, Bram L. Gorissen, and Dick den Hertog. A survey of adjustable robust optimization. Eur. J. Oper. Res., 277(3):799–813, 2019. doi:10.1016/J.EJOR.2018.08.031.
