Unique End of Potential Line

This paper studies the complexity of problems in PPAD $\cap$ PLS that have unique solutions. Three well-known examples of such problems are finding a fixpoint of a contraction map, finding the unique sink of a Unique Sink Orientation (USO), and solving the P-matrix Linear Complementarity Problem (P-LCP). Each of these is a promise problem, and when the promise holds, it always possesses a unique solution. We define the complexity class UEOPL to capture problems of this type. We first define a class that we call EOPL, which consists of all problems that can be reduced to End-of-Potential-Line. This problem merges the canonical PPAD-complete problem End-of-Line with the canonical PLS-complete problem Sink-of-Dag, and so EOPL captures problems that can be solved by a line-following algorithm that simultaneously decreases a potential function. Promise-UEOPL is a promise subclass of EOPL in which the line in the End-of-Potential-Line instance is guaranteed to be unique via a promise. We turn this into a non-promise class UEOPL by adding an extra solution type to EOPL that captures any pair of points that are provably on two different lines. We show that UEOPL $\subseteq$ EOPL $\subseteq$ CLS, and that all of our motivating problems are contained in UEOPL: specifically, USO, P-LCP, and finding a fixpoint of a Piecewise-Linear Contraction under an $\ell_p$-norm all lie in UEOPL. Our results also imply that parity games, mean-payoff games, discounted games, and simple stochastic games lie in UEOPL. All of our containment results are proved via a reduction to a problem that we call One-Permutation Discrete Contraction (OPDC). This problem is motivated by a discretized version of contraction, but it is also closely related to the USO problem. We show that OPDC lies in UEOPL, and moreover that OPDC is UEOPL-complete.


Introduction
Total function problems in NP. The complexity class TFNP contains search problems that are guaranteed to have a solution, and whose solutions can be verified in polynomial time [58]. While it is a semantically defined complexity class and thus unlikely to contain complete problems, a number of syntactically defined subclasses of TFNP have proven very successful at capturing the complexity of total search problems. In this paper, we focus on two in particular, PPAD and PLS. The class PPAD was introduced in [64] to capture the difficulty of problems that are guaranteed total by a parity argument. It has attracted intense attention in the past decade, culminating in a series of papers showing that the problem of computing a Nash equilibrium in two-player games is PPAD-complete [11,16], and more recently a conditional lower bound that rules out a PTAS for the problem [66]. No polynomial-time algorithms for PPAD-complete problems are known, and recent work suggests that no such algorithms are likely to exist [4,33]. PLS is the class of problems that can be solved by local search algorithms (in perhaps exponentially many steps). It has also attracted much interest since it was introduced in [47], and looks similarly unlikely to have polynomial-time algorithms. Examples of problems that are complete for PLS include the problem of computing a pure Nash equilibrium in a congestion game [23], a locally optimal max cut [67], or a stable outcome in a hedonic game [30].
Continuous Local Search. If a problem lies in both PPAD and PLS then it is unlikely to be complete for either class, since this would imply an extremely surprising containment of one class in the other. In their 2011 paper [17], Daskalakis and Papadimitriou observed that there are several prominent total function problems in PPAD ∩ PLS for which researchers have not been able to design polynomial-time algorithms. Motivated by this, they introduced the class CLS, a syntactically defined subclass of PPAD ∩ PLS. CLS is intended to capture the class of optimization problems over a continuous domain in which a continuous potential function is being minimized and the optimization algorithm has access to a polynomial-time continuous improvement function. They showed that many classical problems of unknown complexity are in CLS, including the problem of solving a simple stochastic game, the more general problems of solving a Linear Complementarity Problem with a P-matrix, finding an approximate fixpoint to a contraction map, finding an approximate stationary point of a multivariate polynomial, and finding a mixed Nash equilibrium of a congestion game.
CLS problems with unique solutions. In this paper we study an interesting subset of problems that lie within CLS, and have unique solutions.
Contraction. In this problem we are given a function f : [0, 1]^d → [0, 1]^d that is purported to be c-contracting, meaning that for all points x, y ∈ [0, 1]^d we have d(f(x), f(y)) ≤ c · d(x, y), where c is a constant satisfying 0 < c < 1, and d is a distance metric. Banach's fixpoint theorem states that if f is contracting, then it has a unique fixpoint [3], meaning that there is a unique point x ∈ [0, 1]^d such that f(x) = x.
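To make the uniqueness concrete, the following is a minimal sketch (not from the paper) of the fixpoint iteration that Banach's theorem guarantees converges: each application of a c-contracting map shrinks the distance to the unique fixpoint by a factor of c. The map f and constant c below are illustrative choices.

```python
# Sketch: fixpoint iteration for a c-contracting map on [0, 1].
# The map f and constant c below are illustrative, not from the paper.

def banach_iterate(f, x0, c, eps=1e-12):
    """Iterate x -> f(x).  The distance to the unique fixpoint shrinks
    by a factor c per step, so O(log(1/eps)) iterations suffice; the
    stopping rule |f(x) - x| <= eps*(1-c) guarantees |x - x*| <= eps."""
    x = x0
    while abs(f(x) - x) > eps * (1 - c):
        x = f(x)
    return x

f = lambda x: 0.5 * x + 0.25   # contracting with c = 1/2; fixpoint x* = 1/2
x_star = banach_iterate(f, 0.0, 0.5)
```

The same iteration drives the line-following view developed later in the paper, where the distance to the fixpoint plays the role of a potential.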
P-LCP. The P-matrix linear complementarity problem (P-LCP) is a variant of the linear complementarity problem in which the input matrix is a P-matrix [14]. An interesting property of this problem is that, if the input matrix actually is a P-matrix, then the problem is guaranteed to have a unique solution [14]. Designing a polynomial-time algorithm for P-LCP has been open for decades, at least since the 1978 paper of Murty [62] that provided exponential-time examples for Lemke's algorithm [54] for P-LCPs.

USO.
A unique sink orientation (USO) is an orientation of the edges of an n-dimensional hypercube such that every face of the cube has a unique sink. Since the entire cube is a face of itself, this means that there is a unique vertex of the cube that is a sink, meaning that all edges are oriented inwards. The USO problem is to find this unique sink.
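The unique-sink condition can be checked face by face on a small cube. The sketch below uses an illustrative orientation (every edge points toward the endpoint whose bit is 0, which is a valid USO induced by a linear objective); the representation of faces and orientations is our own choice, not the paper's.

```python
from itertools import combinations, product

def edge_points_into(v, i):
    """Illustrative orientation (not from the paper): every edge in
    dimension i is oriented toward the endpoint whose i-th bit is 0."""
    return (v >> i) & 1 == 0

def face_sinks(n, free, fixed_bits):
    """Sinks of the face of the n-cube whose free dimensions are `free`;
    the remaining dimensions are fixed to the bits in `fixed_bits`.
    A vertex is a sink of the face iff all its face edges point into it."""
    fixed = [i for i in range(n) if i not in free]
    base = sum(b << i for i, b in zip(fixed, fixed_bits))
    return [base + sum(b << i for i, b in zip(free, assign))
            for assign in product([0, 1], repeat=len(free))
            if all(edge_points_into(base + sum(b << i for i, b in zip(free, assign)), i)
                   for i in free)]

def is_uso(n):
    # Every face (every choice of free dims and fixed bits) has a unique sink.
    return all(
        len(face_sinks(n, free, bits)) == 1
        for k in range(n + 1)
        for free in combinations(range(n), k)
        for bits in product([0, 1], repeat=n - k)
    )
```

For this orientation, the unique sink of each face is the vertex whose free bits are all 0; in particular the sink of the whole cube is the all-zero vertex, matching the definition above.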
All of these problems are most naturally stated as promise problems. This is because we have no way of verifying up front whether a function is contracting, whether a matrix is a P-matrix, or whether an orientation is a USO. Hence, it makes sense, for example, to study the contraction problem where it is promised that the function f is contracting, and likewise for the other two.
However, each of these problems can be turned into a non-promise problem that lies in TFNP. In the case of Contraction, if the function f is not contracting, then there exists a short certificate of this fact. Specifically, any pair of points x, y ∈ [0, 1]^d such that d(f(x), f(y)) > c · d(x, y) gives an explicit proof that the function f is not contracting. We call these violations, since they witness a violation of the promise that is inherent in the problem.
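Verifying such a certificate is trivial, which is what makes the non-promise formulation possible. A sketch (the metric and the maps below are illustrative choices, not from the paper):

```python
def is_violation(f, c, x, y, dist):
    """Check a candidate certificate: the pair (x, y) proves that f is not
    c-contracting iff d(f(x), f(y)) > c * d(x, y).  Verification needs only
    two evaluations of f, so the certificate is short and checkable."""
    return dist(f(x), f(y)) > c * dist(x, y)

dist = lambda a, b: abs(a - b)   # one-dimensional metric, for illustration
g = lambda x: 2 * x              # doubles distances, so not contracting
assert is_violation(g, 0.9, 0.0, 1.0, dist)   # the pair (0, 1) is a violation
```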
So Contraction can be formulated as the non-promise problem of either finding a solution or finding a violation. This problem is in TFNP because, in the case where there is not a unique solution, there must exist a violation of the promise. The P-LCP and USO problems also have violations that can be witnessed by short certificates, and so they too can be turned into non-promise problems in the same way; these problems also lie in TFNP.
For Contraction and P-LCP we actually have the stronger result that both problems are in CLS [17]. Prior to this work USO was not known to lie in any non-trivial subclass of TFNP, and placing USO into a non-trivial subclass of TFNP was identified as an interesting open problem by Kalai [51,Problem 6].
We remark that not every problem in CLS has the uniqueness properties that we identify above. For example, the KKT problem [17] lies in CLS, but has no apparent notion of having a unique solution. The problems that we identify here seem to share the special property that there is a natural promise version of the problem, and that promise problem always has a unique solution.

Our contribution
In this paper, we define a complexity class that naturally captures the properties exhibited by problems like Contraction, P-LCP, and USO. In fact, we define two new sub-classes of CLS.
End of potential line. The complexity class EOPL contains every problem that can be reduced in polynomial time to the problem EndOfPotentialLine, which we define in this paper (Definition 9). The EndOfPotentialLine problem unifies in an extremely natural way the circuit-based views of PPAD and of PLS. The canonical PPAD-complete problem is EndOfLine, a problem that provides us with an exponentially large graph consisting of lines and cycles, and asks us to find the end of one of the lines. The canonical PLS-complete problem, Sink-of-Dag, provides us with an exponentially large DAG, whose acyclicity is guaranteed by the existence of a potential function that increases along each edge. The problem EndOfPotentialLine is an instance of EndOfLine that also has a potential function that increases along each edge.
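The successor/potential view can be sketched with a toy instance. Real instances encode the maps S, P, and phi as polynomial-size circuits over exponentially many vertices; the dictionaries below are an illustrative stand-in.

```python
# Toy EndOfPotentialLine instance (illustrative, not the paper's circuit
# encoding): S/P are successor/predecessor maps and phi is a potential that
# strictly increases along every edge.  Vertex 7 is the end of the line.
S = {0: 2, 2: 5, 5: 7, 7: 7}
P = {0: 0, 2: 0, 5: 2, 7: 5}
phi = {0: 0, 2: 1, 5: 2, 7: 3}

def follow_line(start):
    """The (possibly exponential-time) line-following algorithm: walk the
    successor map while the potential strictly increases; the vertex where
    it stops is an end-of-potential-line solution."""
    v = start
    while S[v] != v and phi[S[v]] > phi[v]:
        v = S[v]
    return v
```

Because phi increases along every edge, this walk can never cycle, which is exactly how the potential rules out the cycles that EndOfLine instances may contain.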
So the class EOPL captures problems that admit a single combinatorial proof of their joint membership in the classes PPAD of fixpoint problems and PLS of local search problems, and is a combinatorially-defined alternative to the class CLS. We are able to show that EOPL ⊆ CLS (Corollary 11), by providing a polynomial-time reduction from EndOfPotentialLine to the EndOfMeteredLine problem defined by Hubáček and Yogev [45], which they have shown to lie in CLS.
We remark that it is an interesting open problem to determine whether EOPL = CLS. The inspiration behind both classes was to capture problems in PPAD ∩ PLS. The class CLS does this by affixing a potential function to the PPAD-complete Brouwer fixpoint problem, while EOPL does this by affixing a potential function to the PPAD-complete problem EndOfLine. The class EOPL is not the main focus of this paper, however.
Unique end of potential line. An EndOfPotentialLine instance consists of an exponentially large graph that contains only lines (the cycles that can appear in EndOfLine instances are ruled out by the potential function). The problem explicitly gives us the start of one of these lines. A solution to the problem is a vertex that is at the end of any line, other than the given start vertex. We could find a solution by following the line from the start vertex until we find the other end, although that may take exponential time. The graph may, however, contain many other lines, and the starts and ends of these lines are also solutions.
We define the promise-problem PromiseUniqueEOPL, in which it is promised that there is a unique line in the graph. This line must be the one that begins at the given starting vertex, and so the only solution to the problem is the other end of that line. Thus, if the promise is satisfied, the problem has a unique solution. We can define the promise-class PromiseUEOPL which contains all promise-problems that can be reduced in polynomial-time to PromiseUniqueEOPL.
We are not just studying promise problems in this paper, however. We can turn PromiseUniqueEOPL into a non-promise problem by defining appropriate violations. One might imagine that a suitable violation would be a vertex that is the start of a second line. Indeed, if we are given the promise that there is no start of a second line, then we do obtain the problem PromiseUniqueEOPL. However, with just this violation, we obtain a problem that is identical to EndOfPotentialLine, which is not what we are intending to capture.
Instead we add a violation that captures any pair of vertices v and u that are provably on different lines, even if v and u are in the middle of their respective lines. We do this by using the potential function: if v and u have the same potential, then they must be on different lines, and likewise if the potential of u lies between the potential of v and the potential of the successor of v. We formalise this as the problem UniqueEOPL (Definition 12), and we define the complexity class UniqueEOPL to contain all (non-promise) problems that can be reduced in polynomial-time to UniqueEOPL.
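The two potential-based conditions can be checked directly. A sketch, with toy dictionaries standing in for the circuits of a real UniqueEOPL instance:

```python
def provably_on_different_lines(u, v, S, phi):
    """The two violation conditions from the text, for a pair u != v:
    equal potentials, or phi(u) strictly between phi(v) and phi(S(v)).
    Either proves u and v lie on different lines, because the potential
    increases monotonically along any single line."""
    if u == v:
        return False
    return phi[u] == phi[v] or phi[v] < phi[u] < phi[S[v]]

# Two disjoint lines, 0 -> 1 and 4 -> 5, with interleaved potentials.
S = {0: 1, 1: 1, 4: 5, 5: 5}
phi = {0: 0, 1: 2, 4: 1, 5: 3}
assert provably_on_different_lines(4, 0, S, phi)   # phi(0) < phi(4) < phi(S(0))
```

The point of the second condition is that a single line visits a strictly increasing sequence of potentials, so no vertex of that line can have a potential strictly between those of v and its successor.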
We have that UniqueEOPL ⊆ EOPL by definition, since UniqueEOPL simply adds an extra type of violation to EndOfPotentialLine. We remark that this new violation makes the problem substantially different from EndOfPotentialLine. In EndOfPotentialLine only the starts and ends of lines are solutions, while in UniqueEOPL there are many more solutions whenever there is more than one line in the instance. As such, we view UniqueEOPL as capturing a distinct subclass of problems in EOPL, and we view it as the natural class for promise-problems in PPAD ∩ PLS that have unique solutions.
UEOPL containment results. We show that USO, P-LCP, and a variant of the Contraction problem all lie in UniqueEOPL. We define the concept of a promise-preserving reduction, which is a polynomial-time reduction between two problems A and B, with the property that if A is promised to have a unique solution, then the instance of B that is produced by the reduction will also have a unique solution. All of the reductions that we produce in this paper are promise-preserving, which means that whenever we show that a problem is in UniqueEOPL, we also get that the corresponding promise problem lies in PromiseUEOPL.
For the USO problem, our UniqueEOPL containment result substantially advances our knowledge about the problem. Prior to this work, the problem was only known to lie in TFNP, and Kalai [51,Problem 6] had posed the challenge to place it in some non-trivial subclass of TFNP. Our result places USO in UniqueEOPL, EOPL, CLS, PPAD (and hence PPA and PPP), and PLS, and so we answer Kalai's challenge by placing the problem in all of the standard subclasses of TFNP.
This result is in some sense surprising. Although every face of a USO has a unique sink, the orientation itself may contain cycles, and so there is no obvious way to define a potential function for the problem. Moreover, none of the well-known algorithms for solving USOs [40,74] has the line-following nature needed to produce an EndOfLine instance. Nevertheless, our result shows that one can produce an EndOfLine instance with a potential function for the USO problem.
Theorem 2 (cf. Theorem 38 and Theorem 39). There are two different variants of the P-LCP problem, both of which lie in UniqueEOPL under promise-preserving reductions.
We actually provide two different promise-preserving reductions from P-LCP to UniqueEOPL. The issue here is that there are many possible types of violation that one can define for P-LCP. The standard formulation of P-LCP asks for either a solution or a non-positive principal minor of the input matrix. A matrix is a P-matrix if and only if all of its principal minors are positive, and so this is sufficient to define a total problem.
The reduction from P-LCP to EndOfLine can map the start and end of each line back to either a solution or a non-positive principal minor. Our problem is that the extra violations in the UniqueEOPL instance, corresponding to a proof that there are multiple lines, do not easily map back to non-positive principal minors. They do, however, map back to other short certificates that the input matrix is not a P-matrix. For example, a matrix is a P-matrix if and only if it does not reverse the sign of any non-zero vector [14]. So we can also formulate P-LCP as a total problem that asks for either a solution or a non-zero vector whose sign is reversed.
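Checking a sign-reversal certificate is again a short computation. A sketch, where the matrix below is an illustrative non-P-matrix of our own choosing:

```python
def reverses_sign(M, x):
    """x is a sign-reversal certificate for M (so M is not a P-matrix)
    iff x != 0 and x_i * (Mx)_i <= 0 in every coordinate [14]."""
    n = len(x)
    Mx = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
    return any(x) and all(xi * yi <= 0 for xi, yi in zip(x, Mx))

# A rotation matrix: its 1x1 principal minors are 0, so it is not a P-matrix.
M = [[0, 1], [-1, 0]]
assert reverses_sign(M, [1, 0])   # Mx = (0, -1), so every x_i * (Mx)_i <= 0
```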
We study the following variants of P-LCP. The first variant asks us to find either a solution, a non-positive principal minor, or a non-zero vector whose sign is reversed. The second variant asks us to find either a solution, a non-positive principal minor, or a third type of violation whose definition is inspired by a violation of the USO property. In all cases, we either solve the problem or obtain a short certificate that the input was not a P-matrix, although the format of these certificates can vary.
It is not clear whether these variants are equivalent under polynomial-time reductions, as one would need to be able to map one type of violation to the other efficiently. We remark that if one is only interested in the promise problem, then the choice of violations is irrelevant. Both of our reductions show that promise P-LCP lies in PromiseUEOPL.
Theorem 3 (cf. Theorem 34). Finding the fixpoint of a piecewise linear contraction map in the ℓ_p norm is in UniqueEOPL under promise-preserving reductions, for any p ∈ N ∪ {∞}.
For Contraction, we study contraction maps specified by piecewise linear functions that are contracting with respect to an ℓ_p norm. This differs from the contraction problem studied previously [17], where the function is given by an arbitrary arithmetic circuit. To see why this restriction is necessary, note that although every contraction map has a unique fixpoint, if we allow the function to be specified by an arbitrary arithmetic circuit, then there is no guarantee that the fixpoint is rational. So it is not clear whether finding the exact fixpoint of a contraction map even lies in FNP.
Prior work has avoided this issue by instead asking for an approximate fixpoint, and the problem of finding an approximate fixpoint of a contraction map specified by an arithmetic circuit lies in CLS [17]. However, if we look for approximate fixpoints, then we destroy the uniqueness property that we are interested in, because there are infinitely many approximate fixpoint solutions surrounding any exact fixpoint.
So, we study the problem where the function is represented by a LinearFIXP arithmetic circuit [22], which is a circuit in which the multiplication of two variables is disallowed. This ensures that, when the function actually is contracting, there is a unique rational fixpoint that we can produce. We note that this is still an interesting class of contraction maps, since it is powerful enough to represent simple-stochastic games [22].
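As we understand the gate set from the text and [22], a LinearFIXP circuit uses +, −, max, min, and multiplication by a rational constant only, so every intermediate value stays rational whenever the inputs are rational. The sketch below is our own illustrative evaluator, not the paper's formalism:

```python
from fractions import Fraction

def eval_circuit(gates, inputs):
    """Evaluate a LinearFIXP-style circuit over exact rationals.
    gates: list of ('const*', c, wire) or (op, wire, wire) entries; each
    entry appends one new wire.  Returns the value of the last wire."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "max": max, "min": min}
    wires = [Fraction(x) for x in inputs]
    for g in gates:
        if g[0] == "const*":
            wires.append(Fraction(g[1]) * wires[g[2]])
        else:
            wires.append(ops[g[0]](wires[g[1]], wires[g[2]]))
    return wires[-1]

# f(x) = min(1, x/2 + 1/4): piecewise linear, contracting with c = 1/2.
# Wire 0 is the input x; wire 1 is the constant 1.
gates = [("const*", "1/2", 0),   # wire 2 = x/2
         ("const*", "1/4", 1),   # wire 3 = 1/4
         ("+", 2, 3),            # wire 4 = x/2 + 1/4
         ("min", 1, 4)]          # wire 5 = min(1, x/2 + 1/4)
f = lambda x: eval_circuit(gates, [x, 1])
assert f(Fraction(1, 2)) == Fraction(1, 2)   # the unique fixpoint is rational
```

Because no gate multiplies two variables, the circuit computes a piecewise linear function of its input, which is why the unique fixpoint is guaranteed to be rational.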
We place this problem in UniqueEOPL via a promise-preserving reduction. Our reduction can produce multiple types of violation. In addition to the standard violation of a pair of points at which f is not contracting, our reduction sometimes produces a different type of violation (cf. Definition 32), which, while not giving an explicit violation of contraction, still gives a short certificate that f is not contracting.

Our results also imply that the following problems lie in UniqueEOPL:
• Solving a mean-payoff game.
• Solving a discounted game.
• Solving a simple-stochastic game.
• Solving the ARRIVAL problem.
Finally, we observe that our results prove that several other problems lie in UniqueEOPL. The simple-stochastic game (SSG) problem is known to reduce to Contraction [22] and to P-LCP [39], and thus our two results give two separate proofs that the SSG problem lies in UniqueEOPL. It is known that discounted games can be reduced to SSGs [77], mean-payoff games can be reduced to discounted games [77], and parity games can be reduced to mean-payoff games [65]. So all of these problems lie in UniqueEOPL too. Finally, Gärtner et al. [35] noted that the ARRIVAL problem [20] lies in EOPL, and in fact their EndOfPotentialLine instance always contains exactly one line, and so the problem also lies in UniqueEOPL.
We remark that none of these are promise-problems. Each of them can be formulated so that they unconditionally have a unique solution. Hence, these problems seem to be easier than the problems captured by UniqueEOPL, since problems that are complete for UniqueEOPL only have a unique solution conditioned on the promise that there are no violations.
A UEOPL-complete problem. In addition to our containment results, we also give a UniqueEOPL-completeness result. Specifically, we show that One-Permutation Discrete Contraction (OPDC) is complete for UniqueEOPL.
OPDC is a problem that is inspired by both Contraction and USO. Intuitively, it is a discrete version of Contraction. The inputs to the problem are (a concise representation of) a discrete grid of points P covering the space [0, 1]^d, and a set of direction functions D_i, one for each dimension i, where D_i(p) takes a value in {up, down, zero}. If the instance is a discretization of a contraction map f, then the direction function for dimension i simply points in the direction that f moves in dimension i. To solve the problem, we seek a point p ∈ P such that D_i(p) = zero for all i, which corresponds to a fixpoint of f.
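The direction functions can be sketched concretely. The map f and the grid below are illustrative choices of our own, and the brute-force search stands in for the much more careful line-following construction described later:

```python
def direction(f, p, i):
    """D_i(p): the direction that f moves point p in dimension i."""
    fi, pi = f(p)[i], p[i]
    return "up" if fi > pi else ("down" if fi < pi else "zero")

def find_fixpoint(f, grid):
    """Brute-force search for a point where every direction function is
    'zero', i.e. a grid fixpoint of f.  (The actual reduction avoids this
    exponential search by following a potential-decreasing line.)"""
    for p in grid:
        if all(direction(f, p, i) == "zero" for i in range(len(p))):
            return p

f = lambda p: tuple(0.5 * x + 0.25 for x in p)   # halves distances to (1/2, 1/2)
grid = [(a / 4, b / 4) for a in range(5) for b in range(5)]
assert find_fixpoint(f, grid) == (0.5, 0.5)
```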
Why is the problem called One Permutation Discrete Contraction? This is due to some extra constraints that we place on the problem. An interesting property of a function f that is contracting with respect to an ℓ p norm is that if we restrict the function to a slice, meaning that we fix some of the coordinates and let others vary, then the resulting function is still contracting. We encode this property into the OPDC problem, by insisting that slices should also have unique fixpoints when we ignore the dimensions not in the slice. However, we do not do this for all slices, but only for i-slices in which the last d − i + 1 coordinates have been fixed. In this sense, our definition depends on the order of the dimensions. If we rearranged the dimensions into a different permutation, then we would obtain a different problem, so each problem corresponds to some particular permutation of the dimensions. The name of the problem was chosen to reflect this fact.
We remark that, although the problem was formulated as a discretization of Contraction, it is also closely related to the USO problem. Specifically, if we take the set of points P to be the hypercube {0, 1}^n, then the direction functions actually specify an orientation of the cube. Moreover, the condition that every slice should have a unique fixpoint exactly corresponds to the USO property that every face should have a unique sink. However, since we only insist on this property for i-slices, OPDC can be viewed as a variant of USO in which only some of the faces have unique sinks.
OPDC actually plays a central role in the paper. We reduce Piecewise-Linear Contraction, USO, and P-LCP to it, and we then reduce OPDC to UniqueEOPL, as shown in Figure 1. The reduction from OPDC to UniqueEOPL is by far the most difficult step.
In the case where the promise is satisfied, meaning that the OPDC instance has a unique fixpoint, the reduction defines a single line that starts at the point p ∈ P with p i = 0 for all i, and ends at the unique fixpoint. This line walks around the grid, following the direction functions given by D in a specific manner that ensures that it will find the fixpoint, while also decreasing a potential function at each step. We need to very carefully define the vertices of this line, to ensure that the line is unique, and for this we crucially rely on the fact that every i-slice also has a unique fixpoint.
This does not get us all the way to UniqueEOPL though, because the line we describe above lacks a predecessor circuit. In UniqueEOPL, each vertex has a predecessor, a successor, and a potential, but the line we construct only gives successors and potentials to each vertex. To resolve this, we apply the pebbling game reversibility argument introduced by Bitansky et al. [4], and later improved by Hubáček and Yogev [45]. Using this technique allows us to produce a predecessor circuit, as long as there is exactly one line in the instance.
Our reduction also handles violations in the OPDC instance. The key challenge here is that the pebbling game argument assumes that there is exactly one line, and so far it has only been applied to promise-problems. We show that the argument can be extended to work with non-promise problems that may have multiple lines. This can cause the argument to break, specifically when multiple lines are detected, but we are able to show that these can be mapped back to violations in the OPDC instance.
Theorem 6 (cf. Theorem 25). OPDC is UniqueEOPL-complete under promise-preserving reductions, even when the set of points P is a hypercube.

We show that OPDC is UniqueEOPL-hard by giving a polynomial-time promise-preserving reduction from UniqueEOPL to OPDC. This means that OPDC is UniqueEOPL-complete, and the variant of OPDC in which it is promised that there are no violations is PromiseUEOPL-complete.
Our reduction produces an OPDC instance in which the set of points P is the boolean hypercube {0, 1}^n. In the case where the UniqueEOPL instance has no violations, meaning that it contains a single line, the reduction embeds this line into the hypercube. To do this, it splits the line in half. The second half is embedded into a particular sub-cube, while the first half is embedded into all other sub-cubes. This process is recursive, so each half of the line is again split in half, and further embedded into sub-cubes. The reduction ensures that the only fixpoint of the instance corresponds to the end of the line. If the UniqueEOPL instance does have violations, then this embedding may fail. However, in any instance where the embedding fails, we are able to produce a violation for the original UniqueEOPL instance.
We remark that this hardness reduction makes significant progress towards showing a hardness result for Contraction and USO. As we have mentioned, OPDC is a discrete variant of Contraction, and when the set of points is a hypercube, the problem is also very similar to USO.
The key difference is that OPDC insists that only i-slices should have a unique fixpoint, whereas Contraction and USO insist that all slices should have unique fixpoints. To show a hardness result for either of those two problems, one would need to produce an OPDC instance with that property.
New algorithms. Our final contributions are algorithmic and arise from the structural insights provided by our containment results. Using the ideas from our reduction from Piecewise-Linear Contraction to UniqueEOPL, we obtain the first polynomial-time algorithms for finding fixpoints of Piecewise-Linear Contraction maps in fixed dimension for any ℓ_p norm, where previously such algorithms were known only for the ℓ_2 and ℓ_∞ norms. If the input is not a contraction map, then our algorithm may instead produce a short certificate that the function is not contracting.
We also show that these results can be extended to the case where the contraction map is given by a general arithmetic circuit. In this case, we provide a polynomial-time algorithm that either finds an approximate fixpoint or produces a short certificate that the function is not contracting.
An interesting consequence of our algorithms is that it is now unlikely that ℓ_p-norm Contraction in fixed dimension is CLS-complete. This should be contrasted with the recent result of Daskalakis et al. [18], who showed that the variant of the contraction problem in which a metric is given as part of the input is CLS-complete, even in dimension 3. Our result implies that it is unlikely that this can be directly extended to ℓ_p norms, at least not without drastically increasing the number of dimensions in the instance.
Finally, as noted in [35], one of our reductions from P-LCP to EndOfPotentialLine allows a technique of Aldous [2] to be applied, which in turn gives the fastest known randomized algorithm for P-LCP.

Related work
CLS. Recent work by Hubáček and Yogev [45] proved lower bounds for CLS. They introduced a problem known as EndOfMeteredLine, which they showed to be in CLS, and for which they proved a query complexity lower bound of Ω(2^{n/2}/√n) and hardness under the assumption that one-way permutations and indistinguishability obfuscators for problems in P/poly exist. Another recent result showed that the search version of the Colorful Carathéodory Theorem is in PPAD ∩ PLS, and left open whether the problem is also in CLS [60]. Until recently, it was not known whether there was a natural CLS-complete problem. In their original paper, Daskalakis and Papadimitriou suggested two natural candidates for CLS-complete problems, ContractionMap and P-LCP, which we study in this paper. Recently, two variants of ContractionMap have been shown to be CLS-complete. Whereas in the original definition of ContractionMap it is assumed that an ℓ_p or ℓ_∞ norm is fixed, and the contraction property is measured with respect to the metric induced by this fixed norm, in these two new complete variants a metric [18] or a meta-metric [24] is given as input to the problem.
P-LCP. Papadimitriou showed that P-LCP, the problem of solving the LCP or returning a violation of the P-matrix property, is in PPAD [64] using Lemke's algorithm. The relationship between Lemke's algorithm and PPAD has been studied by Adler and Verma [1]. Later, Daskalakis and Papadimitriou showed that P-LCP is in CLS [17], using the potential reduction method of [53]. Many algorithms for P-LCP have been studied, e.g., [52,61,62]. However, no polynomial-time algorithm is known for P-LCP, or for the promise version where one can assume that the input matrix is a P-matrix.
The best known algorithms for P-LCP are based on a reduction to Unique Sink Orientations (USOs) of cubes [73]. For a P-matrix LCP of size n, the USO algorithms of [74] apply, giving a deterministic algorithm that runs in time O(1.61^n) and a randomized algorithm with expected running time O(1.43^n). The application of Aldous' algorithm [2] to the UniqueEOPL instance that we produce from a P-matrix LCP takes expected time 2^{n/2} · poly(n) = O(1.4143^n) in the worst case.
Unique Sink Orientations. In this paper we study USOs of cubes, a problem that was first studied by Stickney and Watson [73] in the context of P-matrix LCPs. A USO arising from a P-matrix may be cyclic. Motivated by Linear Programming, acyclic USOs (AUSOs) have also been studied, both for cubes and for general polytopes [37,42]. Recently, Gärtner and Thomas studied the computational complexity of recognizing USOs and AUSOs [38]. They found that the problem is coNP-complete for USOs and PSPACE-complete for AUSOs. A series of papers provide upper and lower bounds for specific algorithms for solving (A)USOs, including [28,29,40,56,69,74,75]. An AUSO on an n-dimensional cube can be solved in subexponential time by the RANDOM-FACET algorithm, and this bound is essentially tight for that algorithm [34]. An almost quadratic lower bound on the number of vertex evaluations needed to solve a general USO is known [68]; unlike for AUSOs, the best running times known for general USOs, as for P-matrix LCPs, are exponential. To the best of our knowledge, we are the first to study the general problem of solving a USO from a complexity-theoretic point of view.
Contraction. The problem of computing a fixpoint of a continuous map f : D → D with Lipschitz constant c has been extensively studied, in both continuous and discrete variants [9,10,19]. For arbitrary maps with c > 1, exponential bounds on the query complexity of computing fixpoints are known [8,41]. In [6,44,72], algorithms for computing fixpoints of specialized maps, such as weakly (c = 1) or strictly (c < 1) contracting maps, are studied. For both cases, algorithms are known for the ℓ_2 and ℓ_∞ norms, both for absolute approximation (‖x − x*‖ ≤ ε, where x* is an exact fixpoint) and relative approximation (‖x − f(x)‖ ≤ ε). A number of algorithms are known for the ℓ_2 norm handling both types of approximation [43,63,71]. There is an exponential lower bound for absolute approximation with c = 1 [71]. For relative approximation and a domain of dimension d, an O(d · log(1/ε))-time algorithm is known [43]. For absolute approximation with c < 1, an ellipsoid-based algorithm with time complexity O(d · [log(1/ε) + log(1/(1 − c))]) is known [43]. For the ℓ_∞ norm, [70] gave an algorithm that finds an ε-relative approximation in time O(log(1/ε)^d), which is polynomial for constant d. In summary, for the ℓ_2 norm, polynomial-time algorithms are known for strictly contracting maps; for the ℓ_∞ norm, algorithms that run in polynomial time for constant dimension are known. For arbitrary ℓ_p norms, to the best of our knowledge, no polynomial-time algorithms for constant dimension were known before this paper.
Infinite games. Simple Stochastic Games are related to parity games, an extensively studied class of two-player zero-sum infinite games that capture important problems in formal verification and logic [21]. There is a sequence of polynomial-time reductions from parity games to mean-payoff games to discounted games to simple stochastic games [36,39,50,65,77]. The complexity of solving these problems is unresolved and has received much attention over many years (see, for example, [5,13,27,28,48,77]). In a recent breakthrough [7], a quasi-polynomial-time algorithm for parity games was devised, and there are now several algorithms with this running time [7,26,49]. For mean-payoff, discounted, and simple stochastic games, the best-known algorithms run in randomized subexponential time [55]. The existence of a polynomial-time algorithm for solving any of these games would be a major breakthrough. Simple stochastic games can also be reduced in polynomial time to Piecewise-Linear Contraction with the ℓ_∞ norm [22].

Future directions
A clear direction for future work is to show that further problems are UniqueEOPL-complete. We have several conjectures.
We think that, among our three motivating problems, USO is the most likely to be UniqueEOPL-complete. Our hardness proof for OPDC already goes some way towards proving this, since we showed that OPDC is hard even when the set of points is a hypercube. The key difference between OPDC on a hypercube and USO is that OPDC only requires that the faces corresponding to i-slices have unique sinks, while USO requires that all faces have unique sinks.
Conjecture 2. Piecewise-Linear Contraction in an ℓ p norm is hard for UniqueEOPL.
Our OPDC hardness result also goes some way towards showing that Piecewise-Linear Contraction is hard, however there are more barriers to overcome here. In addition to the i-slice vs. all slice issue, we would also need to convert the discrete OPDC problem to the continuous contraction problem. Converting discrete problems to continuous fixpoint problems has been well-studied in the context of PPAD-hardness reductions [16,59], but here the additional challenge is to carry out such a reduction while maintaining the contraction property.
Aside from hardness, we also think that the relationship between Contraction and USO should be explored further. Our formulation of the OPDC problem exposes significant similarities between the two problems, which until this point have not been recognised. Can we reduce USO to Contraction in polynomial time?
Of all of our conjectures, this will be the most difficult to prove. Since P-LCP reduces to USO, the hardness of USO should be resolved before we attempt to show that P-LCP is hard. One possible avenue towards showing the hardness of P-LCP might be to reduce from Piecewise-Linear Contraction. Our UniqueEOPL containment proof for Piecewise-Linear Contraction makes explicit use of the fact that the problem can be formulated as an LCP, although in that case the resulting matrix is not a P-matrix. Can we modify the reduction to produce a P-matrix?
The question of EOPL vs CLS is unresolved, and we actually think it could go either way. One could show that EOPL = CLS by placing either of the two known CLS-complete Contraction variants into EOPL [18,24]. If the two classes are actually distinct, then it is interesting to ask which of the problems in CLS are also in EOPL.
On the other hand, we believe that UniqueEOPL is a strict subset of EOPL. The evidence for this is that the extra violation in UniqueEOPL that does not appear in EndOfPotentialLine changes the problem significantly. This new violation introduces many new solutions whenever there are multiple lines in the instance, and so it is unlikely, in our view, that one could reduce EndOfPotentialLine to UniqueEOPL. Of course, there is no hope of unconditionally proving that UniqueEOPL ⊂ EOPL, but we can ask for further evidence to support the idea. For example, can oracle separations shed any light on the issue?
Finally, we remark that UniqueEOPL is the closest complexity class to FP, among all the standard sub-classes of TFNP. However, we still think that further subdivisions of UniqueEOPL will be needed. Specifically, we do not believe that simple stochastic games, or any of the problems that can be reduced to them, are UniqueEOPL-complete, since all of these problems have unique solutions unconditionally. Further research will be needed to classify these problems.

Unique End of Potential Line
In this section we define two new complexity classes called EOPL and UniqueEOPL. These two classes are defined by merging the definitions of PPAD and PLS, so we will begin by recapping those classes.
The complexity class PPAD contains every problem that can be reduced to EndOfLine [64].
Intuitively, the problem defines an exponentially large graph in which every vertex has in-degree and out-degree at most one. Each bit-string in {0, 1}^n defines a vertex, while the circuits S and P define successor and predecessor functions for each vertex. A directed edge exists from vertex x to vertex y if and only if S(x) = y and P(y) = x. Any vertex x for which P(S(x)) ≠ x has no outgoing edge, while any vertex y with S(P(y)) ≠ y has no incoming edge.
The condition that P(0^n) = 0^n ≠ S(0^n) specifies that the vertex 0^n has no incoming edge, and so it is the start of a line. To solve the problem, we must find either a solution of type (E1), which is a vertex x that is the end of a line, or a solution of type (E2), which is a vertex x other than 0^n that is the start of a line. Since the graph contains at least one line, which starts at 0^n, there must exist at least one solution of type (E1), and so the problem is in TFNP.
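The line-following argument behind this totality proof can be sketched as follows. This is a toy, hypothetical encoding in which dictionaries stand in for the Boolean circuits S and P; real instances are exponentially large, so this walk may take exponential time.

```python
# Toy EndOfLine instance: a single line 0 -> 1 -> 2 -> 3.
# Dictionaries stand in for the successor/predecessor circuits S and P.
succ = {0: 1, 1: 2, 2: 3, 3: 3}
pred = {0: 0, 1: 0, 2: 1, 3: 2}

def find_e1(S, P, start=0):
    """Follow the line from `start` until a vertex with no outgoing edge
    (an (E1) solution) is reached. A vertex x has an outgoing edge iff
    S(x) != x and P(S(x)) == x."""
    x = start
    while S(x) != x and P(S(x)) == x:
        x = S(x)
    return x
```

Here `find_e1(succ.get, pred.get)` walks 0 → 1 → 2 and returns 3, the end of the toy line.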
The complexity class PLS contains every problem that can be reduced to SinkOfDag [47]. Once again, the problem specifies an exponentially large graph on the vertex set {0, 1}^n, but this time the only guarantee is that each vertex has out-degree at most one. The circuit S gives a successor function. In this problem, some bit-strings do not correspond to vertices in the graph. Specifically, if we have S(x) = x for some bit-string x ∈ {0, 1}^n, then x does not encode a vertex.
The second circuit V gives a potential to each vertex from the set {0, 1, . . . , 2 m − 1}. An edge exists in the graph if and only if the potential increases along that edge. Specifically, there is an edge from x to y if and only if S(x) = y and V (x) < V (y). This restriction means that the graph must be a DAG.
To solve the problem, we must find a sink of the DAG, i.e., a vertex that has no outgoing edge. Since we require that S(0^n) ≠ 0^n, we know that the DAG has at least one vertex, and therefore it must also have at least one sink. This places the problem in TFNP.
End of potential line. We define a new problem called EndOfPotentialLine, which merges the two definitions of EndOfLine and SinkOfDag into a single problem.
This problem defines an exponentially large graph in which each vertex has in-degree and out-degree at most one (as in EndOfLine), and that is also a DAG (as in SinkOfDag). An edge exists from x to y if and only if S(x) = y, P(y) = x, and V(x) < V(y). As in SinkOfDag, only some bit-strings encode vertices, and we adopt the same convention that if S(x) = x for some bit-string x, then x does not encode a vertex.
So we have a single instance that is simultaneously an instance of EndOfLine and an instance of SinkOfDag. To solve the problem, it suffices to solve either of these problems. Solutions of type (R1) consist of vertices x that are either the end of a line, or the start of a line (excluding the case where x = 0 n ). Solutions of type (R2) consist of any point x where the potential does not strictly increase on the edge between x and S(x).
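The two solution types can be found by a direct walk along the line. The sketch below again uses dictionaries in place of circuits; it is illustrative, not the paper's construction.

```python
# Toy EndOfPotentialLine instance: one line 0 -> 1 -> 2 -> 3 with a
# potential that strictly increases along each edge.
succ = {0: 1, 1: 2, 2: 3, 3: 3}
pred = {0: 0, 1: 0, 2: 1, 3: 2}
pot  = {0: 0, 1: 5, 2: 6, 3: 9}

def solve_eopl(S, P, V, start=0):
    """Follow the line from `start`. Returns an (R1) end-of-line vertex,
    or an (R2) vertex whose potential fails to strictly increase."""
    x = start
    while True:
        y = S(x)
        if y == x or P(y) != x:
            return ("R1", x)   # no outgoing edge: x is the end of a line
        if V(y) <= V(x):
            return ("R2", x)   # successor exists but potential does not rise
        x = y
```

On the toy instance the walk terminates at vertex 3, an (R1) solution.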
The complexity class EOPL. We define the complexity class EOPL to consist of all problems that can be reduced in polynomial time to EndOfPotentialLine. By definition the problem lies in PPAD ∩ PLS, since one can simply ignore solutions of type (R2) to obtain an EndOfLine instance, and ignore solutions of type (R1) to obtain a SinkOfDag instance.
In fact we are able to show the stronger result that EOPL ⊆ CLS. To do this, we reduce EndOfPotentialLine to the problem EndOfMeteredLine, which was defined by Hubáček and Yogev, who also showed that the problem lies in CLS [45]. The main difference between the two problems is that EndOfMeteredLine requires that the potential increases by exactly one along each edge. The reduction from EndOfMeteredLine to EndOfPotentialLine is straightforward. The other direction is more involved, and requires us to insert new vertices into the instance. Specifically, if there is an edge between a vertex x and a vertex y, but V(y) ≠ V(x) + 1, then we insert a new chain of vertices of length V(y) − V(x) − 1 between x and y, ensuring that the potential always increases by exactly one along each edge. The full details are given in Appendix A, where the following theorem is proved.
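The chain-insertion idea can be sketched on an explicit graph. This is a toy illustration on dictionaries; the actual reduction performs the subdivision implicitly with circuits.

```python
# Toy instance: the edge 0 -> 1 raises the potential by 3, so the
# reduction must subdivide it with two fresh intermediate vertices.
succ = {0: 1, 1: 2, 2: 2}
V = {0: 0, 1: 3, 2: 4}

def pad_potentials(succ, V):
    """Illustrative gadget: expand each edge x -> y with V(y) - V(x) > 1
    into a chain of fresh vertices so that every edge raises the
    potential by exactly one."""
    new_succ, new_V = dict(succ), dict(V)
    fresh = max(succ) + 1
    for x, y in list(succ.items()):
        if x == y:
            continue  # self-loop: x has no outgoing edge
        prev = x
        for step in range(1, V[y] - V[x]):
            new_succ[prev] = fresh        # splice in a fresh vertex
            new_V[fresh] = V[x] + step    # its potential fills the gap
            prev, fresh = fresh, fresh + 1
        new_succ[prev] = y
    return new_succ, new_V

new_succ, new_V = pad_potentials(succ, V)
```

After padding, every edge in the toy graph raises the potential by exactly one, and the original vertices keep their potentials.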
As we have mentioned, Hubáček and Yogev have shown that EndOfMeteredLine lies in CLS [45], so we get the following corollary.
Problems with unique solutions. The problems that we study in this paper all share a specific set of properties that cause them to produce an interesting subclass of EndOfPotentialLine instances. Each of the problems that we study has a promise, and if the promise is satisfied the problem has a unique solution.
For example, in the Contraction problem, we are given a function f : [0, 1]^d → [0, 1]^d that is promised to be contracting, meaning that d(f(x), f(y)) ≤ c · d(x, y) for some positive constant c < 1 and some distance metric d. We cannot efficiently check whether f is actually contracting, but if it is, then Banach's fixpoint theorem states that f has a unique fixpoint [3]. If f is not contracting, then there will exist violations that can be witnessed by short certificates. For Contraction, a violation is any pair of points x, y such that d(f(x), f(y)) > c · d(x, y).
We can use violations to formulate the problem as a non-promise problem that lies in TFNP. Specifically, if we ask for either a fixpoint or a violation of contraction, then the contraction problem is total, because if there is no fixpoint, then the contrapositive of Banach's theorem implies that there must exist a violation of contraction.
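A minimal sketch of this totality argument, using a one-dimensional example map of our own choosing:

```python
def contraction_violation(f, c, x, y, dist=lambda a, b: abs(a - b)):
    """Return (x, y) as a violation witness if the pair breaks the
    contraction condition d(f(x), f(y)) <= c * d(x, y); otherwise None."""
    if dist(f(x), f(y)) > c * dist(x, y):
        return (x, y)
    return None

def banach_iterate(f, x0, iters=60):
    """For a genuinely contracting map, repeated application converges
    to the unique fixpoint promised by Banach's theorem."""
    x = x0
    for _ in range(iters):
        x = f(x)
    return x

# f(x) = x/2 + 1 is contracting with c = 1/2 and has unique fixpoint 2;
# g(x) = 2x is not contracting for c = 1/2.
f = lambda x: x / 2 + 1
g = lambda x: 2 * x
```

The violation check is what turns the promise problem into a total one: any pair that defeats the fixpoint search certifies that the promise fails.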
Unique End of Potential Line. When we place this type of problem in EOPL, we obtain an instance with extra properties. Specifically, if the original problem has no violations, meaning that the promise is satisfied, then the EndOfPotentialLine instance will contain a single line that starts at 0^n and ends at the unique solution of the original problem. This means that, if we ever find two distinct lines in our EndOfPotentialLine instance, then we immediately know that the original instance fails to satisfy the promise.
We define the following problem, which is intended to capture these properties. In addition to the usual end-of-line solutions, it allows the following violation solutions.
(UV1) A point x ∈ {0, 1}^n such that x ≠ S(x), P(S(x)) = x, and V(S(x)) − V(x) ≤ 0.
(UV2) A point x ∈ {0, 1}^n such that x ≠ 0^n and S(P(x)) ≠ x.
(UV3) Two points x, y ∈ {0, 1}^n such that x ≠ y, x ≠ S(x), y ≠ S(y), and either V(x) = V(y) or V(x) < V(y) < V(S(x)). Violations of type (UV3) give another witness that the instance contains more than one line. This is encoded by a pair of vertices x and y with either V(x) = V(y), or with the property that the potential of y lies strictly between the potentials of x and S(x). Since we require the potential to strictly increase along every edge of a line, y cannot lie on the same line as x: all vertices before x on x's line have potential strictly less than V(x), while all vertices after S(x) have potential strictly greater than V(S(x)).
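The (UV3) condition can be evaluated directly. The sketch below uses dictionaries in place of circuits, a hypothetical toy encoding.

```python
def uv3_witness(x, y, S, V):
    """(UV3) check: vertices x and y provably lie on different lines if
    both are vertices (S(v) != v), x != y, and either V(x) == V(y) or
    V(y) lies strictly between V(x) and V(S(x))."""
    if x == y or S(x) == x or S(y) == y:
        return False
    return V(x) == V(y) or V(x) < V(y) < V(S(x))

# Two toy lines: an edge 0 -> 1 whose potentials are 0 and 10, and a
# second line 5 -> 6 whose vertex 5 has potential 4, squeezed strictly
# between V(0) and V(S(0)).
succ = {0: 1, 1: 1, 5: 6, 6: 6}
pot  = {0: 0, 1: 10, 5: 4, 6: 7}
```

Here (0, 5) is a (UV3) witness: vertex 5 cannot be on the same line as vertex 0.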
We remark that (UV2) by itself already captures the property "there is a unique line", since if a second line cannot start, then it cannot exist. So why do we insist on the extra type of violation given by (UV3)? Violations of type (UV3) allow us to solve the problem immediately if we ever detect the existence of multiple lines. Note that this is not the case if we only have solutions of type (UV2), since we may find two vertices on two different lines, but both of them may be exponentially many steps away from the start of their respective lines.
By adding (UV3) solutions, we make the problem easier than EndOfPotentialLine (note that without UV3, the problem is actually the same as EndOfPotentialLine). This means that problems that can be reduced to UniqueEOPL have the very special property that, if at any point you detect the existence of multiple lines, either through the start of a second line, or through a violation in (UV3), then you immediately get a violation in the original problem without any extra effort. All of the problems that we study in this paper share this property.
The complexity class UniqueEOPL. We define the complexity class UniqueEOPL to be the class of problems that can be reduced in polynomial time to the problem UniqueEOPL. We note that UniqueEOPL ⊆ EOPL is trivial, since the problem remains total even if we disallow solutions of type (UV3).
For each of our problems, it is also interesting to consider the complexity of the promise variant, in which it is guaranteed via a promise that no violations exist. We define PromiseUniqueEOPL to be the promise version of UniqueEOPL in which 0^n is the only start of a line (and hence there are no solutions of type (UV2) or (UV3)). We define the complexity class PromiseUEOPL to be the class of promise problems that can be reduced in polynomial time to PromiseUniqueEOPL.
Promise-preserving reductions. The problem UniqueEOPL has the interesting property that, if it is promised that there are no violation solutions, then there must be a unique solution. All of the problems that we study in this paper share this property, and indeed when we reduce them to UniqueEOPL, the resulting instance will have a unique line whenever the original problem has no violation solutions.
We formalise this by defining the concept of a promise-preserving reduction. This is a reduction between two problems A and B, both of which have proper solutions and violation solutions. The reduction is promise-preserving if, when it is promised that A has no violations, then the resulting instance of B also has no violations. Hence, if we reduce a problem to UniqueEOPL via a chain of promise-preserving reductions, and we know that there are no violations in the original problem, then there is a unique line ending at the unique proper solution in the UniqueEOPL instance.
Note that this is more restrictive than a general reduction. We could in principle produce a reduction that took an instance of A, where it is promised that there are no violations, and produce an instance of B that sometimes contains violations. By using promise-preserving reductions, we are showing that our problems have the natural properties that one would expect for a problem in UniqueEOPL. Specifically, that the promise version has a unique solution, and that this can be found by following the unique line in the UniqueEOPL instance.
One added bonus is that, if we show that a problem is in UniqueEOPL via a chain of promise-preserving reductions, then we automatically get that the promise version of that problem, where it is promised that there are no violations, lies in PromiseUEOPL. Moreover, if we show that a problem is UniqueEOPL-complete via a promise-preserving reduction, then this also implies that the promise version of that problem is PromiseUEOPL-complete.

One-Permutation Discrete Contraction
The One-Permutation Discrete Contraction (OPDC) problem will play a crucial role in our results. We will show that the problem lies in UniqueEOPL, and we will then reduce both PL-Contraction and Grid-USO to OPDC, thereby showing that those problems also lie in UniqueEOPL. We will also show that UniqueEOPL can be reduced to OPDC, making this problem the first example of a non-trivial UniqueEOPL-complete problem.
Direction functions. OPDC can be seen as a discrete variant of the continuous contraction problem. Recall that a contraction map is a function f : [0, 1]^d → [0, 1]^d that is contracting under a metric d, i.e., d(f(x), f(y)) ≤ c · d(x, y) for all x, y ∈ [0, 1]^d and some constant c satisfying 0 < c < 1. We discretize the space by overlaying a grid of points on the [0, 1]^d cube. Let [k] denote the set {0, 1, . . . , k}. Given a tuple of grid widths (k_1, k_2, . . . , k_d), we define the set of grid points P(k_1, k_2, . . . , k_d) = [k_1] × [k_2] × · · · × [k_d]. We will refer to P(k_1, k_2, . . . , k_d) simply as P when the grid widths are clear from the context. Note that each point p ∈ P is a tuple (p_1, p_2, . . . , p_d), where p_i is an integer between 0 and k_i, and this point maps onto the point (p_1/k_1, p_2/k_2, . . . , p_d/k_d) ∈ [0, 1]^d. Instead of a single function f, in the discrete version of the problem we use a family of direction functions over the grid P. For each dimension i ≤ d, we have a function D_i : P → {up, down, zero}. Intuitively, the natural reduction from a contraction map f to a family of direction functions would, for each point p ∈ P and each dimension i ≤ d, set D_i(p) = up if f(p)_i > p_i, D_i(p) = down if f(p)_i < p_i, and D_i(p) = zero if f(p)_i = p_i, where p is identified with the point of [0, 1]^d that it maps onto. In other words, the function D_i simply outputs whether f moves p up, down, or not at all in dimension i. So a point p ∈ P with D_i(p) = zero for all i would correspond to the fixpoint of f. Note, however, that the grid may not actually contain the fixpoint of f, and so there may be no point p satisfying D_i(p) = zero for all i.
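The natural reduction just described can be sketched as follows. This is an illustrative implementation; the example map f and the grid widths are our own.

```python
def direction_functions(f, widths):
    """Discretization sketch: grid point p = (p_1,...,p_d) is identified
    with (p_1/k_1,...,p_d/k_d), and D_i reports whether f moves it up,
    down, or not at all in dimension i."""
    def D(i, p):  # i is 1-based, p a tuple of integer grid coordinates
        x = tuple(pj / kj for pj, kj in zip(p, widths))
        fx = f(x)
        if fx[i - 1] > x[i - 1]:
            return "up"
        if fx[i - 1] < x[i - 1]:
            return "down"
        return "zero"
    return D

# Example map (our own): f sends every point to (0.5, 0.5), so on a grid
# with widths k_1 = k_2 = 2 the unique fixpoint sits at grid point (1, 1).
D = direction_functions(lambda x: (0.5, 0.5), (2, 2))
```

Points below the fixpoint see "up", the fixpoint itself sees "zero" in both dimensions, and points above it see "down".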
A two-dimensional example. To illustrate this definition, consider the two-dimensional instance given in Figure 3, which we will use as a running example. It shows two direction functions: the figure on the left shows a direction function for the up-down dimension, which we will call dimension 1 and illustrate using the color blue. The figure on the right shows a direction function for the left-right dimension, which we will call dimension 2 and illustrate using the color red. Each square in the figures represents a point in the discretized space, and the value of the direction function is shown inside the box. Note that there is exactly one point p where D_1(p) = D_2(p) = zero, which is the fixpoint that we seek.
Slices. We will frequently refer to subsets of P in which some of the dimensions have been fixed. A slice is represented as a tuple (s_1, s_2, . . . , s_d), where each s_i is either
• a number in [0, 1], which indicates that dimension i should be fixed to s_i, or
• the special symbol *, which indicates that dimension i is free to vary.
We define Slice_d = ([0, 1] ∪ {*})^d to be the set of all possible slices in d dimensions. Given a slice s ∈ Slice_d, we define P_s ⊆ P to be the set of points in that slice, i.e., P_s contains every point p ∈ P such that p_i = s_i whenever s_i ≠ *. We say that a slice s′ ∈ Slice_d is a sub-slice of a slice s ∈ Slice_d if s_j ≠ * implies s′_j = s_j for all j ∈ [d]. An i-slice is a slice s for which s_j = * for all j ≤ i, and s_j ≠ * for all j > i. In other words, all dimensions up to and including dimension i are free to vary, while all other dimensions are fixed.
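These definitions translate directly into code. In this sketch slices are represented over grid coordinates for simplicity, a hypothetical encoding of our own.

```python
STAR = "*"

def in_slice(p, s):
    """p lies in P_s iff p_i == s_i wherever s_i is fixed (not *)."""
    return all(si == STAR or pi == si for pi, si in zip(p, s))

def is_sub_slice(s_prime, s):
    """s' is a sub-slice of s: every coordinate fixed in s is fixed to
    the same value in s'."""
    return all(si == STAR or ti == si for ti, si in zip(s_prime, s))

def is_i_slice(s, i):
    """Dimensions 1..i are free (*), and all later dimensions are fixed."""
    return all((sj == STAR) == (j < i) for j, sj in enumerate(s))
```

For example, in two dimensions (*, 3) is a 1-slice, and (*, *) is the 2-slice containing every point.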
In our two-dimensional example, there are three types of i-slices. There is one 2-slice: the slice (*, *) that contains every point. For each x, there is a 1-slice (*, x), which restricts the left/right dimension to the value x. For each pair x, y there is a 0-slice (y, x), which contains only the exact point corresponding to x and y.
Discrete contraction maps. We can now define a one-permutation discrete contraction map. We say that a point p ∈ P_s in some slice s is a fixpoint of s if D_i(p) = zero for all dimensions i where s_i = *. The following definition captures the promise version of the problem, and we will later give a non-promise version by formulating appropriate violations.
Definition 13 (One-Permutation Discrete Contraction Map). Let P be a grid of points over [0, 1] d and let D = (D i ) i=1,...,d be a family of direction functions over P . We say that D and P form a one-permutation discrete contraction map if, for every i-slice s, the following conditions hold.
1. There is a unique fixpoint of s.
2. Let s′ ∈ Slice_d be a sub-slice of s in which some coordinate i for which s_i = * has been fixed to a value, with all other coordinates unchanged. If q is the unique fixpoint of s, and p is the unique fixpoint of s′, then either p = q, or D_i(p) points towards q, i.e., D_i(p) = up if p_i < q_i, and D_i(p) = down if p_i > q_i.
The first condition specifies that each i-slice must have a unique fixpoint. Since the slice (*, *, . . . , *) is an i-slice, this implies that the full problem also has a unique fixpoint.
The second condition is a more technical one. It tells us that if we have found the unique fixpoint p of the sub-slice s′, and this point is not the unique fixpoint of the i-slice s itself, then the direction function D_i(p) tells us which way to walk to find the unique fixpoint of s. This is a crucial property that we will use in our reduction from OPDC to UniqueEOPL, and in our algorithms for contraction maps.
In our two-dimensional example, the first condition requires that every 1-slice (*, x) has a unique fixpoint, which corresponds to saying that for every fixed value of the left/right dimension, there is a unique blue point that is zero. The second condition requires that, if we are at some blue zero, then the red direction function at that point tells us the direction of the overall fixpoint. It can be seen that our example satisfies both of these requirements.
Note that both properties only consider i-slices. In the continuous contraction problem with an ℓ_p-norm distance metric, every slice has a unique fixpoint, and so one might expect a discrete version of contraction to share this property. The problem is that the second property is very difficult to establish for arbitrary slices. Indeed, when we reduce PL-Contraction to OPDC in Section 4.2, we must carefully choose the grid size to ensure that both the first and second properties hold. In fact, our choice of grid size for dimension i will depend on the grid size of dimension i + 1, which is why our definition only considers i-slices.
The name One-Permutation Discrete Contraction was chosen to emphasize this fact. The i-slices correspond to restricting dimensions in order, starting with dimension d. Since the order of the dimensions is arbitrary, we could have chosen any permutation of the dimensions, but we must fix one such permutation to define the problem.
The OPDC problem. The OPDC problem is as follows: given a family of direction functions D = (D_i)_{i=1,...,d} that forms a one-permutation discrete contraction map, find the unique point p such that D_i(p) = zero for all i. Note that we cannot efficiently verify whether D is actually a one-permutation discrete contraction map.
So, the OPDC problem is a promise problem, and we will formulate a total variant of it that uses a set of violations to cover the cases where D fails to be a discrete contraction map.
(O1) A point p ∈ P such that D_i(p) = zero for all i ≤ d.
(OV1) An i-slice s and two points p, q ∈ P_s with p ≠ q such that D_j(p) = D_j(q) = zero for all j ≤ i.
(OV2) An i-slice s and two points p, q ∈ P_s such that p_j = q_j for all j ≠ i, q_i = p_i + 1, D_j(p) = D_j(q) = zero for all j < i, and D_i(p) = up and D_i(q) = down.
(OV3) An i-slice s and a point p ∈ P_s such that D_j(p) = zero for all j < i, and either p_i = 0 and D_i(p) = down, or p_i = k_i and D_i(p) = up.
Solution type (O1) encodes a fixpoint, which is the proper solution of the discrete contraction map, while solution types (OV1) through (OV3) encode violations of the discrete contraction map property. Solution type (OV1) witnesses a violation of the first property of a discrete contraction map, namely that each i-slice should have a unique fixpoint. A solution of type (OV1) gives two different points p and q in the same i-slice that are both fixpoints of that slice.
Solutions of type (OV2) witness violations of the first and second properties of a discrete contraction map. In these solutions we have two points p and q that are both fixpoints of their respective (i − 1)-slices and are directly adjacent in dimension i within an i-slice s. If there is a fixpoint r of the slice s, then this witnesses a violation of the second property of a discrete contraction map, which states that D_i(p) and D_i(q) should both point towards r, and clearly one of them does not. On the other hand, if slice s has no fixpoint, then p and q also witness this fact, since the fixpoint would have to lie strictly between the adjacent points p and q, which is impossible.
Solutions of type (OV3) consist of a point p that is a fixpoint of its (i − 1)-slice but where D_i(p) points outside the boundary of the grid. These are clear violations of the second property, since D_i(p) should point towards the fixpoint of the i-slice containing p, but that fixpoint cannot lie outside the grid.
It is perhaps not immediately obvious that OPDC is a total problem. Ultimately we will prove this fact in the next section by giving a promise-preserving reduction from OPDC to UniqueEOPL. This will give us a proof of totality, and will also prove that, if the discrete contraction map has no violations, then it does indeed have a unique solution.

One-Permutation Discrete Contraction is in UniqueEOPL
In this section, we will show that One-Permutation Discrete Contraction lies in UniqueEOPL under promise-preserving reductions.
UFEOPL. Our reduction will make use of an intermediate problem that we call UniqueForwardEOPL, which is a version of UniqueEOPL in which we only have a successor circuit S, meaning that no predecessor circuit P is given.
Without the predecessor circuit, this problem bears more resemblance to SinkOfDag than to EndOfPotentialLine. As in SinkOfDag, a bit-string x encodes a vertex if and only if S(x) ≠ x, and an edge exists between vertices x and y if and only if S(x) = y and V(x) < V(y). The proper solution type (UF1) asks us to find a vertex that is a sink of the DAG, just as before.
The difference lies in the violation solution type (UFV1), which is the same as violation type (UV3) of UniqueEOPL. It asks for two vertices x and y that either have the same potential, or for which the potential of y lies strictly between the potential of x and the potential of S(x). Note that this restriction severely constrains a SinkOfDag instance: if there are no violation solutions, then the DAG must consist of a single line that starts at 0 n , and ends at the unique solution of type (UF1). So in this sense, the problem really does capture instances of UniqueEOPL that lack a predecessor circuit.
The UniqueForwardEOPL problem will play a crucial role in our reduction. We will reduce OPDC to it, and we will then reduce it to UniqueEOPL.
An illustration of the reduction. Before we discuss the formal definition of the construction, we first give some intuition by describing the reduction for the two-dimensional example shown in Figure 3.
The reduction uses the notion of a surface. On the left side in Figure 4, we have overlaid the surfaces of the two direction functions from Figure 3. The surface of a direction function D i is exactly the set of points p ∈ P such that D i (p) = zero. The fixpoint p that we seek has D i (p) = zero for all dimensions i, and so it lies at the intersection of these surfaces.
To reach the overall fixpoint, we walk along a path starting from the bottom-left corner, which is shown on the right-hand side of Figure 4. The path begins by walking upwards until it finds the blue surface. Once it has found the blue surface, there are two possibilities: either we have found the overall fixpoint, in which case the line ends, or we have not, and the red direction function tells us that the overall fixpoint lies to the right.
If we have not found the overall fixpoint, then we move one step to the right, go back to the bottom of the diagram, and start walking upwards again. We keep repeating this until we find the overall fixpoint. This procedure gives us the line shown on the right-hand side of Figure 4.
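The two-dimensional walk can be sketched as follows, on a toy violation-free instance of our own construction. Here "up" in dimension 2 is taken to mean "to the right".

```python
def follow_line(D1, D2, k):
    """2D walk sketch: in each column (dimension 2 fixed), walk up
    dimension 1 until the blue surface (D1 == zero) is hit; then either
    stop at the overall fixpoint, or move one column right and restart
    at the bottom. Assumes a violation-free instance on a
    (k+1) x (k+1) grid."""
    x, path = 0, []
    while x <= k:
        for y in range(k + 1):        # walk up the column
            path.append((y, x))
            if D1((y, x)) == "zero":  # hit the blue surface
                if D2((y, x)) == "zero":
                    return (y, x), path   # overall fixpoint found
                break                     # D2 points right: next column
        x += 1
    raise AssertionError("promise violated: no fixpoint found")

# Toy instance (our own): blue surface at height 2 in columns 0 and 1,
# height 1 in column 2; overall fixpoint at (2, 1) on a 3x3 grid.
blue = {0: 2, 1: 2, 2: 1}
D1 = lambda p: "zero" if p[0] == blue[p[1]] else ("up" if p[0] < blue[p[1]] else "down")
D2 = lambda p: "zero" if p[1] == 1 else ("up" if p[1] < 1 else "down")

fixpoint, path = follow_line(D1, D2, 2)
```

On this instance the walk climbs column 0 to its blue point, steps right, climbs column 1, and stops at the fixpoint (2, 1).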
The potential. How do we define a potential for this line? Observe that the dimension-two coordinates of the points on the line are weakly monotone, meaning that the line never moves to the left. Furthermore, within any dimension-two slice (meaning any slice in which the left/right coordinate is fixed), the dimension-one coordinate is monotonically increasing. So, if p = (p_1, p_2) denotes any point on the line, and if k denotes a constant larger than the maximum coordinate in either dimension, then V(p) = k · p_1 + k² · p_2 monotonically increases along the line, and so can be used as a potential function.
Uniqueness. To provide a promise-preserving reduction to UFEOPL, we must argue that the line is unique whenever the OPDC instance has no violations. Here we must carefully define what exactly a vertex on the line actually is, to ensure that no other line can exist. Specifically, we must be careful that only points that are to the left of the fixpoint are actually on the line, and that no "false" line exists to the right of the fixpoint.
Here we rely on the following fact: if the line visits a point with coordinate x in dimension 2, then it must have visited the point p on the blue surface in the slice defined by x − 1. Moreover, for that point p we must have D_2(p) = up, which means that it is to the left of the overall fixpoint.
Using this fact, each vertex on our line will be a pair (p, q), where p is the current point that we are visiting, and q is either
• the symbol −, indicating that we are still in the first column of points, and we have never visited a point on the blue surface, or
• a point q that is on the blue surface and satisfies q_2 = p_2 − 1 and D_2(q) = up.
Hence the point q is always the last point that we visited on the blue surface, which provides a witness that we have not yet walked past the overall fixpoint. When we finish walking up a column of points, and find the point on the blue surface, we overwrite q with the new point that we have found. This step is the reason why only a successor circuit can be given for the line, since the value that is overwritten cannot easily be computed by a predecessor circuit.
Violations. Our two-dimensional OPDC example does not contain any violations, but our reduction can still handle all possible violations in the OPDC instance. At a high level, there are two possible ways in which the reduction can go wrong if there are violations.
1. It is possible that, as we walk upwards in some column, we do not find a fixpoint, and our line gets stuck. This creates an end-of-line solution of type (UF1), which must be mapped back to an OPDC violation. In our two-dimensional example, this case corresponds to a column of points in which there is no point on the blue surface. However, if there is no point on the blue surface, then we will either
• find two adjacent points p and q in that column with D_1(p) = up and D_1(q) = down, which is a solution of type (OV2), or
• find a point p at the top of the column with D_1(p) = up, or a point q at the bottom of the column with D_1(q) = down. Both of these are solutions of type (OV3).
There is also the similar case where we walk all the way to the right without finding an overall fixpoint, in which case we will find a point p on the right-hand boundary that satisfies D_1(p) = zero and D_2(p) = up, which is a solution of type (OV3).
2. The other possibility is that there may be more than one point on the blue surface in some of the columns. This will inevitably lead to multiple lines, since if q and q′ are both points on the blue surface in some column, and p is some point in the column to their right, then (p, q) and (p, q′) will both be valid vertices, lying on two different lines.
These can show up as violations of type (UFV1), which we map back to solutions of type (OV1). Specifically, the points q and q′, which are given as part of the two vertices, are both fixpoints of the same slice, which is exactly what (OV1) asks for.
We can argue that our reduction is promise-preserving. This is because violation solutions in the UFEOPL instance are never mapped back to proper solutions of the OPDC instance. This means that, if we promise that the OPDC instance has no violations, then the resulting UFEOPL instance must also contain no violations.
The full reduction. Our reduction from OPDC to UniqueForwardEOPL generalizes the approach given above to d dimensions. We say that a point p ∈ P is on the i-surface if D_j(p) = zero for all j ≤ i. In our two-dimensional example we followed a sequence of points on the one-surface in order to find a point on the two-surface, and in between any two points on the one-surface, we followed a sequence of points on the zero-surface (every point is trivially on the zero-surface). In general, our line will visit a sequence of points on the (d − 1)-surface in order to find the point on the d-surface, which is the fixpoint. Between any two points on the (d − 1)-surface the line visits a sequence of points on the (d − 2)-surface, between any two points on the (d − 2)-surface it visits a sequence of points on the (d − 3)-surface, and so on.
The line will follow the same pattern that we laid out in two dimensions. Every time we find a point on the i-surface, we remember it, increment our position in dimension i + 1 by 1, and reset our coordinates back to 0 for all dimensions j ≤ i. Hence, a vertex will be a tuple (p_0, p_1, . . . , p_d), where each p_i is either
• the symbol −, indicating that we have not yet encountered a point on the i-surface, or
• the most recent point on the i-surface that we have visited.
This is a generalization of the witnessing scheme that we saw in two dimensions.
The potential is likewise generalized so that the potential of a point $p$ is proportional to $\sum_{i=1}^{d} k^i \cdot p_i$, where again $k$ is some constant that is larger than the grid size. This means that progress in dimension $i$ dominates progress in dimension $j$ whenever $j < i$, which allows the potential to monotonically increase along the line.
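The lexicographic potential can be sketched in a few lines of Python (our own illustration, not the paper's exact construction; the weight $k$ and the example coordinates are arbitrary choices):

```python
# Sketch of the lexicographic potential described above: the potential of
# a point p is sum_i k^i * p_i, where k exceeds the grid width, so any
# progress in dimension i outweighs resetting all dimensions j < i to 0.

def potential(p, k):
    """Potential of a point p = (p_1, ..., p_d) with per-dimension weight k."""
    return sum(k**i * coord for i, coord in enumerate(p, start=1))

# Moving from (9, 9, 3) to (0, 0, 4): dimensions 1 and 2 are reset to 0,
# but the increment in dimension 3 dominates, so the potential rises.
k = 10
assert potential((0, 0, 4), k) > potential((9, 9, 3), k)
```

This is exactly the property needed for the line-following construction: a step that makes progress in a high dimension may freely reset all lower dimensions without decreasing the potential.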
We are also able to deal with all possible violations, using the ideas that we have described in the two-dimensional case. Full details of this construction are given in Appendix B.1, where the following lemma is proved.
There is a polynomial-time promise-preserving reduction from OPDC to UniqueForwardEOPL.
From UniqueForwardEOPL to UniqueForwardEOPL+1. The next step of the reduction is to slightly modify the UniqueForwardEOPL instance, so that the potential increases by exactly one in each step. Specifically, we define the problem UniqueForwardEOPL+1. There are two differences between this problem and UniqueForwardEOPL. Firstly, an edge exists between $x$ and $y$ if and only if $S(x) = y$ and $V(y) = V(x) + 1$, and this is reflected in the modified definition of solution type (UFP1). Secondly, solution type (UFPV1) has been modified to only cover the case where we have two vertices $x$ and $y$ that have the same potential. The case where $V(x) < V(y) < V(S(x))$ is not covered, since in this setting this would imply $V(S(x)) > V(x) + 1$, which already gives us a solution of type (UFP1).
It is not difficult to reduce UniqueForwardEOPL to UniqueForwardEOPL+1, using the same techniques that we used in the reduction from EndOfPotentialLine to EndOfMeteredLine in Theorem 10. This gives us the following lemma, which is proved in Appendix B.2.
Lemma 18. There is a polynomial-time promise-preserving reduction from UniqueForwardEOPL to UniqueForwardEOPL+1.
UniqueForwardEOPL+1 to UniqueEOPL. The final step of the proof is to reduce UniqueForwardEOPL+1 to UniqueEOPL. For this, we are able to build upon existing work. The following problem was introduced by Bitansky et al. [4].
Definition 19 (SinkOfVerifiableLine [4]). The input to the problem consists of a starting vertex $x_s \in \{0,1\}^n$, a target integer $T \le 2^n$, and two boolean circuits $S : \{0,1\}^n \to \{0,1\}^n$ and $W : \{0,1\}^n \times [T] \to \{0,1\}$; the goal is to find a vertex $x$ such that $W(x, T) = 1$.
SinkOfVerifiableLine is intuitively very similar to UniqueForwardEOPL. In this problem, a single line is encoded, where, as usual, the vertices are encoded as bit-strings, and the circuit $S$ gives the successor of each vertex. The difference in this problem is that the circuit $W$ gives a way of verifying how far along the line a given vertex is. Specifically, $W(x, i) = 1$ if and only if $x$ is the $i$th vertex on the line. Note that this is inherently a promise problem, since if $W(x, i) = 1$ for some $i$, we have no way of knowing whether $x$ is actually $i$ steps along the line, without walking all of those steps ourselves.
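The connection between potentials and the verifier circuit can be seen on a toy example (our own illustration, not the paper's construction): when the potential of a line rises by exactly one per step and the start has potential 0, the potential of a vertex reveals its position, which is exactly what $W$ must verify.

```python
# Toy line 0 -> 1 -> ... -> 7 with potential V(x) = x: since the potential
# increases by exactly one along each edge, V(x) equals x's position
# (0-indexed) on the line, and the verifier W is immediate.

T = 8                              # length of the toy line
S = lambda x: min(x + 1, T - 1)    # successor along the path
V = lambda x: x                    # potential: +1 per edge, 0 at the start

def W(x, i):
    """Verifier: is x the i-th vertex on the line (1-indexed)?"""
    return 1 if V(x) == i - 1 else 0

assert W(0, 1) == 1                               # the start is vertex 1
assert all(W(S(x), V(x) + 2) == 1 for x in range(T - 1))
assert W(T - 1, T) == 1                           # the sink is vertex T
```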
It was shown by Hubáček and Yogev [45] that SinkOfVerifiableLine can be reduced in polynomial time to EndOfMeteredLine, and hence also to EndOfPotentialLine (via Theorem 10). Moreover, the resulting EndOfPotentialLine instance has a unique line, so this reduction also reduces SinkOfVerifiableLine to UniqueEOPL. It is easy to reduce the promise version of UniqueForwardEOPL+1 to SinkOfVerifiableLine, since the potential increases by exactly one along each edge, and so it tells us exactly how far along the line each vertex is, which allows us to implement the circuit $W$.
However, the existing work only deals with the promise problem. Our contribution is to deal with violations. We show that, if one creates a SinkOfVerifiableLine instance from a UniqueForwardEOPL+1 instance, in the way described above, and applies the reduction of Hubáček and Yogev to produce a UniqueEOPL instance, then any violation can be mapped back to a solution in the original UniqueForwardEOPL+1 instance. Hence, we show the following lemma, whose full proof appears in Appendix B.3.

Lemma 20. There is a polynomial-time promise-preserving reduction from UniqueForwardEOPL+1 to UniqueEOPL.
This completes the chain of promise-preserving reductions from OPDC to UniqueEOPL. Hence, we have shown the following theorem.

One-Permutation Discrete Contraction is UniqueEOPL-hard
In this section we will show that One-Permutation Discrete Contraction is UniqueEOPL-complete, by giving a hardness result. Specifically, we give a reduction from UniqueEOPL to OPDC.
Modifying the line. The first step of the reduction is to slightly alter the UniqueEOPL instance. Specifically, we would like to ensure the following two properties.
1. Every edge increases the potential by exactly one. That is, $V(S(x)) = V(x) + 1$ for every vertex $x$.
2. The line has length exactly $2^n$ for some integer $n$. More specifically, we ensure that if $x$ is the end of any line then we have $V(x) = 2^n - 1$. Since the start of the line given in the problem has potential 0, this ensures that the length of that line is exactly $2^n$, although other lines may be shorter.
We have already developed a technique for ensuring the first property in the reduction from EOPL to EOML in Theorem 10, which can be reused here. Specifically, we introduce a chain of dummy vertices between any pair of vertices $x$ and $y$ with $S(x) = y$ and $V(y) > V(x) + 1$. The second property can be ensured by choosing $n$ so that $2^n$ is larger than the longest possible line in the instance. Then, at every vertex $x$ that is the end of a line, we introduce a chain of dummy vertices of the form $(x, j)$. The vertex $e = (x, 2^n - V(x) - 1)$ will be the new end of the line, and note that $V(e) = 2^n - 1$ as required. The full details of this are given in Appendix C.1, where the following lemma is shown.
Lemma 22. Given a UniqueEOPL instance $L = (S, P, V)$, there is a polynomial-time promise-preserving reduction that produces a UniqueEOPL instance $L' = (S', P', V')$, where
• For every $x$ and $y$ with $y = S(x)$ and $x = P(y)$ we have $V(y) = V(x) + 1$, and
• There exists an integer $n$ such that $x$ is a solution of type (U1) if and only if we have $V(x) = 2^n - 1$.
For the remainder of this section, we will assume that we have a UniqueEOPL instance $L = (S, P, V)$ that satisfies the two extra conditions given by Lemma 22. We will use $m$ to denote the bit-length of a vertex in $L$.
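The padding step can be sketched as follows (a toy rendering of the construction, using our own tuple representation of the dummy vertices):

```python
# Sketch of the padding step: every end-of-line vertex x is extended by a
# chain of dummy vertices (x, 1), (x, 2), ..., so that the new end
# e = (x, 2**n - V(x) - 1) has potential exactly 2**n - 1, with each
# dummy edge increasing the potential by exactly one.

def pad_line_end(x, V_x, n):
    """Return the dummy chain appended after end-of-line vertex x,
    together with the potentials assigned to the dummy vertices."""
    chain = [(x, j) for j in range(1, 2**n - V_x)]
    potentials = [V_x + j for j in range(1, 2**n - V_x)]  # +1 per edge
    return chain, potentials

chain, pots = pad_line_end("x", V_x=5, n=3)
assert chain[-1] == ("x", 2**3 - 5 - 1)   # new end e = (x, 2^n - V(x) - 1)
assert pots[-1] == 2**3 - 1               # ...with potential 2^n - 1
```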
The set of points. We create an OPDC instance over a boolean hypercube with $m \cdot n$ dimensions, so our set of points is $P = \{0,1\}^{mn}$. We will interpret each point $p \in P$ as a tuple $(v_1, v_2, \dots, v_n)$, where each $v_i$ is a bit-string of length $m$, meaning that each $v_i$ can represent a vertex in $L$.
To understand the reduction, it helps to consider the case where there is a unique line in $L$. We know that this line has length exactly $2^n$. The reduction repeatedly splits this line into two equal parts.
Observe that $L_1$, the first half of the line, and $L_2$, the second half, both contain exactly $2^{n-1}$ vertices. The idea is to embed $L_1$ and $L_2$ into different sub-cubes of the $\{0,1\}^{mn}$ point space. The line that we embed will be determined by the last element of the tuple. Let $v$ be the vertex satisfying $V(v) = 2^{n-1}$, meaning $v$ is the first element of $L_2$. We embed $L_2$ into the sub-cube $(*, *, \dots, *, v)$, and a copy of $L_1$ into each sub-cube $(*, *, \dots, *, u)$ with $u \ne v$.
Note that this means that we embed a single copy of $L_2$, but many copies of $L_1$. Specifically, there are $2^m$ possibilities for the final element of the tuple. One of these corresponds to the sub-cube containing $L_2$, while $2^m - 1$ of them contain a copy of $L_1$.
The construction is recursive. So we split $L_2$ into two lines $L_{2,1}$ and $L_{2,2}$, each containing half of the vertices of $L_2$. If $w$ is the vertex satisfying $V(w) = 2^{n-1} + 2^{n-2}$, which is the first vertex of $L_{2,2}$, then we embed a copy of $L_{2,2}$ into the sub-cube $(*, *, \dots, *, w, v)$, where $v$ is the same vertex that we used above, and we embed a copy of $L_{2,1}$ into each sub-cube $(*, *, \dots, *, u, v)$, where $u \ne w$. Likewise $L_1$ is split into two, and embedded into the sub-cubes of $(*, *, \dots, *, u)$ whenever $u \ne v$.
Given a point $(v_1, v_2, \dots, v_n)$, we can view the bit-string $v_n$ as choosing either $L_1$ or $L_2$, based on whether $V(v_n) = 2^{n-1}$. Once that decision has been made, we can then view $v_{n-1}$ as choosing one half of the remaining line. Since the original line $L$ has length $2^n$, and we repeat this process $n$ times, this means that at the end of the process we will be left with a line containing a single vertex. So in this way, a point $(v_1, v_2, \dots, v_n)$ is a representation of some vertex in $L$, specifically the vertex that is left after we repeatedly split the line according to the choices made by $v_n$ through $v_1$.
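The repeated-halving decoding can be sketched on a toy instance (our own simplification: we take $L$ to be the line $0, 1, \dots, 2^n - 1$ with $V(v) = v$, and represent each tuple element directly as a vertex):

```python
# Toy model of decode: v_j selects the second half of the current segment
# exactly when it equals that half's first vertex; any other value selects
# the first half. After n halvings a single vertex remains.

def decode(vs, n):
    """vs = (v_1, ..., v_n); returns the vertex of L represented by vs."""
    lo, size = 0, 2**n
    for v in reversed(vs):        # process v_n first, then v_{n-1}, ...
        size //= 2
        if v == lo + size:        # v picks the second half of the segment
            lo += size
        # otherwise stay in the first half
    return lo                     # a segment of length one remains

n = 3
assert decode((0, 0, 0), n) == 0   # always choose the first half
assert decode((7, 6, 4), n) == 7   # always choose the second half
```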
We can compute the vertex represented by any tuple in polynomial time. Moreover, given a slice $(*, *, \dots, *, v_i, v_{i+1}, \dots, v_n)$ that fixes elements $i$ through $n$ of the tuple, we can produce, in polynomial time, a UniqueEOPL instance corresponding to the line that is embedded in that slice. This is formalised in the following lemma, whose proof is given in Appendix C.2.

Lemma 23. There are polynomial-time algorithms for computing the following two functions.
• The function $\mathrm{decode}(v_1, v_2, \dots, v_n)$, which takes a point in $P$ and returns the corresponding vertex of $L$.
• The function $\mathrm{subline}(v_i, v_{i+1}, \dots, v_n)$, which returns a UniqueEOPL instance corresponding to the line that is embedded in the slice $(*, *, \dots, *, v_i, v_{i+1}, \dots, v_n)$.
Although we have described this construction in terms of a single line, the two polynomial-time algorithms given by Lemma 23 are capable of working with instances that contain multiple lines. In the case where there are multiple lines, there may be two or more bit-strings $x$ and $y$ with $V(x) = V(y) = 2^{n-1}$. In that case, we will embed second-half instances into $(*, *, \dots, x)$ and $(*, *, \dots, y)$. This is not a problem for the functions decode and subline, although this may lead to violations in the resulting OPDC instance that we will need to deal with.
The direction functions. The direction functions will carry out the embedding of the lines. Since our space is $\{0,1\}^{mn}$, we will need to define $m \cdot n$ direction functions $D_1$ through $D_{mn}$.
The direction functions $D_{m(n-1)+1}$ through $D_{mn}$ correspond to the bits used to define $v_n$ in a point $(v_1, v_2, \dots, v_n)$. These direction functions are used to implement the transition between the first and second half of the line. For each point $p = (v_1, v_2, \dots, v_n)$ we define these functions using the following algorithm.
1. In the case where $V(v_n) \ne 2^{n-1}$, meaning that $\mathrm{decode}(p)$ is a vertex in the first half of the line, then there are two possibilities.
(a) If $V(\mathrm{decode}(p)) = 2^{n-1} - 1$, meaning that $p$ is the last vertex on the first half of the line, then we orient the direction functions of dimensions $m(n-1)+1$ through $mn$ towards the bit-string given by $S(\mathrm{decode}(p))$. This captures the idea that once we reach the end of the first half of the line, we should then move to the second half, and we do this by moving towards the sub-cube $(*, *, \dots, *, S(\mathrm{decode}(p)))$. So for each $i$ in the range $m(n-1)+1 \le i \le mn$, the function $D_i$ points towards the corresponding bit of $S(\mathrm{decode}(p))$.
(b) If the rule above does not apply, then we orient everything towards 0. Specifically, we set each of these direction functions to point towards 0 in its dimension. This is an arbitrary choice: our reduction would work with any valid direction rules in this case.
2. If $V(v_n) = 2^{n-1}$, then we are in the second half of the line. In this case we set $D_i(p) = \text{zero}$ for all dimensions $i$ in the range $m(n-1)+1 \le i \le mn$. This captures the idea that, once we have entered the second half of the line, we should never leave it again.
We use the same idea recursively to define direction functions for all dimensions $i \le m(n-1)$. This gives us a family of polynomial-time computable direction functions $D = (D_i)_{i=1,\dots,mn}$. The full details can be found in Appendix C.3.
The proof. We have now defined $P$ and $D$, so we have an OPDC instance. We must now argue that the reduction is correct. The intuitive idea is as follows. If we are at some point $p \in P$, and $\mathrm{decode}(p) = v$ is a vertex that is not the end of a line, then there is some direction function $D_i$ such that $D_i(p) \ne \text{zero}$. We can see this directly for the case where $V(\mathrm{decode}(p)) = 2^{n-1} - 1$, since the direction functions on dimensions $m(n-1)+1$ through $mn$ will be oriented towards $S(\mathrm{decode}(p))$, and so $p$ will not be a solution.
As we show in the proof, the same property holds for all other vertices in the middle of the line. The end of the line will be a solution, because it will be encoded by the point where each $v_i$ is the first vertex on the second half of the line embedded in the corresponding sub-cube. Our direction functions ensure that $D_j(p) = \text{zero}$ for all $j$ for this point.
Our reduction must also deal with violations. Violations of type (OV3) are impossible by construction. In violations of type (OV1) and (OV2) we have two points $p$ and $q$ that are in the same $i$-slice. Here we specifically use the fact that violations can only occur within $i$-slices. Note that an $i$-slice will fix the last $mn - i$ bits of the tuple $(v_1, v_2, \dots, v_n)$, which means that there will be an index $j$ such that all $v_\ell$ with $\ell > j$ are fixed. This allows us to associate the slice with the line $L' = \mathrm{subline}(v_{j+1}, v_{j+2}, \dots, v_n)$, and we know that both $p$ and $q$ encode vertices of $L'$. In both cases, we are able to recover two vertices in $L'$ that have the same potential, and these vertices also have the same potential in $L$. So we get a solution of type (UV3). The details are rather involved, and we defer the proof to Appendix C.4, where the following lemma is proved.
Lemma 24. There is a polynomial-time promise-preserving reduction from UniqueEOPL to OPDC.
Thus, we have shown the following theorem.
Theorem 25. OPDC is UniqueEOPL-complete under promise-preserving reductions, even when the set of points P is a hypercube.
Since we have shown bidirectional promise-preserving reductions between OPDC and UniqueEOPL, we also get that the promise version of OPDC is complete for PromiseUEOPL.

Unique Sink Orientations
Unique sink orientations. Let $C = \{0,1\}^n$ be an $n$-dimensional hypercube. An orientation of $C$ gives a direction to each edge of $C$. We formalise this as a function $\Psi : C \to \{0,1\}^n$ that assigns a bit-string to each vertex of $C$, with the interpretation that the $i$th bit of the string gives an orientation of the edge in dimension $i$. More precisely, for each vertex $v \in C$ and each dimension $i$, let $u$ be the vertex that is adjacent to $v$ in dimension $i$.
• If $\Psi(v)_i = 0$ then the edge between $v$ and $u$ is oriented towards $v$.
• If $\Psi(v)_i = 1$ then the edge between $v$ and $u$ is oriented towards $u$.
Note that this definition does not insist that $v$ and $u$ agree on the orientation of the edge between them, meaning that $\Psi(v)_i$ and $\Psi(u)_i$ may orient the edge in opposite directions. However, this will be a violation in our setup, and a proper orientation should be thought of as always assigning a consistent direction to each edge.
A face is a subset of $C$ in which some coordinates have been fixed. This can be defined using the same notation that we used for slices in OPDC. So a face $f = (f_1, f_2, \dots, f_n)$, where each $f_i$ is either 0, 1, or $*$, and the sub-cube defined by $f$ contains every vertex $v \in C$ with $v_i = f_i$ for every $i$ with $f_i \ne *$. A unique sink orientation (USO) is an orientation in which every face has a unique sink. Since $f = (*, *, \dots, *)$ is a face, this also implies that the whole cube has a unique sink, and the USO problem is to find the unique sink.
Placing the problem in TFNP. The USO property is quite restrictive, and there are many orientations that are not USOs. Indeed, the overall cube may not have a sink at all, or it may have multiple sinks, and this may also be the case for the other faces of the cube. Fortunately, Szabó and Welzl have pointed out that if an orientation is not a USO, then there is a succinct witness of this fact [74].
We have that $\Psi$ is a USO if and only if, for every pair of distinct vertices $v, u \in C$, there exists some dimension $i$ such that $v_i \ne u_i$ and $\Psi(v)_i \ne \Psi(u)_i$. Put another way, this means that if we restrict the orientation only to the sub-cube defined by any two vertices $v$ and $u$, then the orientations of $v$ and $u$ must be different on that sub-cube. If one uses $\oplus$ to denote the XOR operation on binary strings, and $\cap$ to denote the bit-wise and-operation, then this condition can be written concisely as $(\Psi(v) \oplus \Psi(u)) \cap (v \oplus u) \ne 0^n$. Note that this condition also ensures that the orientation is consistent, since if $v$ and $u$ differ in only a single dimension, then the condition states that they must agree on the orientation of the edge between them.
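The condition is easy to evaluate for any pair of vertices; the following sketch (our own, purely illustrative brute-force scan over all pairs, which is of course exponential in $n$) either certifies the condition or returns a violating pair, with vertices and outmaps encoded as $n$-bit integers:

```python
# Brute-force check of the Szabó-Welzl USO condition: an orientation psi
# is a USO iff (psi(v) XOR psi(u)) AND (v XOR u) != 0 for all distinct
# pairs v, u; a pair where the expression is 0 is a succinct violation.

from itertools import combinations

def uso_violation(psi, n):
    """Return a violating pair (v, u), or None if the condition holds."""
    for v, u in combinations(range(2**n), 2):
        if (psi(v) ^ psi(u)) & (v ^ u) == 0:
            return (v, u)
    return None

# The "uniform" orientation psi(v) = v directs every edge towards the
# all-zero vertex; it satisfies the condition at every pair, so it is a USO.
assert uso_violation(lambda v: v, 3) is None
# A constant outmap violates the condition at every pair of vertices.
assert uso_violation(lambda v: 0, 3) == (0, 1)
```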
We use this condition to formulate the USO problem as a problem in TFNP.
Note that this formulation of the problem allows the orientation function to decline to give an orientation for some vertices, and this is indicated by setting Ψ(v) = −. Any such vertex is a violation of type (USV1). While this adds nothing interesting to the USO problem, we will use this in Section 4.3 when we reduce P-LCP to USO, since in some cases the reduction may not be able to produce an orientation at a particular vertex.
Assuming that there are no violations of type (USV1), it is easy to see that the problem is total. This is because every USO has a sink, giving a solution of type (US1), while every orientation that is not a USO has a violation of type (USV2).
Placing the problem in UniqueEOPL. We show that the problem lies in UniqueEOPL by providing a promise-preserving reduction from Unique-Sink-Orientation to OPDC. The reduction is actually not too difficult, because when the point set for the OPDC instance is a hypercube, the OPDC problem can be viewed as a less restrictive variant of USO. Specifically, USO demands that every face has a unique sink, while OPDC only requires that the i-slices should have unique sinks.
The reduction creates an OPDC instance on the same set of points, meaning that $P = C$. The direction functions simply follow the orientation given by $\Psi$. Specifically, for each $v \in P$ and each dimension $i$, the function $D_i(v)$ points towards the neighbour of $v$ in dimension $i$ whenever the edge between them is oriented away from $v$, that is, whenever $\Psi(v)_i = 1$, and we set $D_i(v) = \text{zero}$ otherwise (likewise when $\Psi(v) = -$).
To prove that this is correct, we show that every solution of the OPDC instance can be mapped back to a solution of the USO instance. Any fixpoint of the OPDC instance satisfies $D_i(v) = \text{zero}$ for all $i$, which can only occur if $v$ is a sink, or if $\Psi(v) = -$. The violation solutions of OPDC can be used to generate a pair of vertices that constitute a (USV2) violation. We defer the details to Appendix D, where the following lemma is proved.

Lemma 27. There is a polynomial-time promise-preserving reduction from Unique-Sink-Orientation to OPDC.
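The reduction just described can be sketched in a few lines (our own rendering of the construction, with vertices and outmaps encoded as integers and dimensions indexed from 0):

```python
# Sketch of the USO -> OPDC direction functions: D_i(v) points to v's
# neighbour in dimension i when the edge is oriented away from v (bit i
# of psi(v) is 1), and is "zero" otherwise. A vertex is then an OPDC
# fixpoint exactly when it is a sink of the orientation.

def direction(psi, v, i):
    """Direction function D_i at vertex v."""
    if not (psi(v) >> i) & 1:
        return "zero"                                # edge enters v
    return "up" if not (v >> i) & 1 else "down"      # move to neighbour

n = 3
psi = lambda v: v          # the uniform USO, whose sink is vertex 0
fixpoints = [v for v in range(2**n)
             if all(direction(psi, v, i) == "zero" for i in range(n))]
assert fixpoints == [0]    # the unique sink is the unique fixpoint
```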
Thus we have shown the following theorem.

Piecewise Linear Contraction Maps
In this section, we show that finding a fixpoint of a piecewise linear contraction map lies in UniqueEOPL. Specifically, we study contraction maps where the function f is given as a LinearFIXP circuit, which is an arithmetic circuit comprised of max, min, +, −, and ×ζ (multiplication by a constant) gates [22]. Hence, a LinearFIXP circuit defines a piecewise linear function.

Violations.
Not every function $f$ is contracting, and the most obvious way to prove that $f$ is not contracting is to give a pair of points $x$ and $y$ that satisfy $\|f(x) - f(y)\|_p > c \cdot \|x - y\|_p$, which directly witnesses the fact that $f$ is not contracting.
However, when we discretize Contraction in order to reduce it to OPDC, there are certain situations in which we have a convincing proof that $f$ is not contracting, but no apparent way to actually produce a violation of contraction. In fact, the discretization itself is non-trivial, so we will explain that first, and then define the type of violations that we will use.
The reduction. We are given a function $f : [0,1]^n \to [0,1]^n$ that is purported to be contracting with contraction factor $c$ in the $\ell_p$ norm. We will produce an OPDC instance by constructing the point set $P$, and a family of direction functions $D$.
The most complex step of the reduction is to produce an appropriate set of points $P$ for the OPDC instance. This means we need to choose integers $k_1, k_2, \dots, k_d$ in order to define the point set $P(k_1, k_2, \dots, k_d)$, where we recall that this defines a grid of integers, where each dimension $i$ can take values between 0 and $k_i$. We will describe the method for picking $k_1$ through $k_d$ after we have specified the rest of the reduction.
The direction functions will simply follow the directions given by $f$. Specifically, for every point $p \in P(k_1, k_2, \dots, k_d)$, let $p'$ be the corresponding point in $[0,1]^n$, meaning that $p'_i = p_i / k_i$ for all $i$. For every dimension $i$ we define the direction function $D_i$ so that $D_i(p)$ is up if $f(p')_i > p'_i$, down if $f(p')_i < p'_i$, and zero if $f(p')_i = p'_i$. In other words, the function $D_i$ simply checks whether $f(p')$ moves up, down, or not at all in dimension $i$. This completes the specification of the family of direction functions $D$.
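These direction functions can be sketched directly (an illustration only; the toy contraction $f(x) = x/2 + 1/4$, with factor $c = 1/2$ and fixpoint $1/2$, is our own example, and the grid width $k = 4$ is chosen so that the fixpoint lies on the grid):

```python
# Sketch of the direction functions for contraction: D_i reports whether
# f moves the scaled grid point p' = p / k up, down, or not at all in
# dimension i.

def make_direction(f, k):
    def D(p, i):
        x = [pj / k for pj in p]          # p': the grid point in [0,1]^n
        fx = f(x)
        if fx[i] > x[i]:
            return "up"
        if fx[i] < x[i]:
            return "down"
        return "zero"
    return D

f = lambda x: [xi / 2 + 0.25 for xi in x]    # toy contraction, fixpoint 1/2
D = make_direction(f, k=4)
assert D((0, 0), 0) == "up"       # f moves 0 towards the fixpoint 1/2
assert D((4, 4), 0) == "down"     # f moves 1 towards the fixpoint 1/2
assert D((2, 2), 0) == "zero"     # 2/4 = 1/2 is the fixpoint: all zero
```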
We must carefully choose $k_1$ through $k_d$ to ensure that the fixpoint of $f$ is contained within the grid. In fact, we need a stronger property: for every $i$-slice of the grid, if $f$ has a fixpoint in that $i$-slice, then it should also appear in the grid. Recall that $p \in P$ is a fixpoint of some slice $s$ if $D_i(p) = \text{zero}$ for every $i$ for which $s_i = *$. We can extend this definition to the continuous function $f$ as follows: a point $x \in [0,1]^n$ is a fixpoint of $f$ for the slice $s$ if $f(x)_i = x_i$ for every $i$ with $s_i = *$, where we now interpret $s$ as specifying that $x_i = s_i / k_i$ whenever $s_i \ne *$. We are able to show the following lemma, whose proof appears in Appendix F.1.
Lemma 30. There exist integers $k_1, k_2, \dots, k_d$ such that every fixpoint of $f$ for every $i$-slice lies on the grid $P(k_1, k_2, \dots, k_d)$. Moreover, the number of bits needed to write down each $k_i$ is polynomial in the number of bits needed to write down $f$.
This lemma states that we can pick the grid size to be fine enough so that all fixpoints of f in all i-slices are contained within the grid. The proof of this is actually quite involved, and relies crucially on the fact that we have access to a LinearFIXP representation of f . From this, we can compute upper bounds on the bit-length of any point that is a fixpoint of f . We also rely on the fact that we only need to consider i-slices, because our proof fixes the grid-widths one dimension at a time, starting with dimension d and working backwards.
The extra violation. The specification of our reduction is now complete, but we have still not fully defined the original problem, because we need to add an extra violation. The issue arises with solutions of type (OV2), where we have an $i$-slice $s$ and two points $p, q$ in $s$ that witness the violation. This means that $p$ and $q$ are both fixpoints of their respective $(i-1)$-slices, and are directly adjacent to each other in dimension $i$. We are able to show, in this situation, that if $f$ is contracting, then $f$ has a fixpoint for the slice $s$, and it must lie between $p$ and $q$. The following lemma is shown in Appendix F.2.
Lemma 31. If $f$ is contracting, and we have two points $p$ and $q$ that are a violation of type (OV2), then there exists a point $x \in [0,1]^n$ in the slice $s$ that satisfies all of the following.
• $(x - f(x))_j = 0$ for all $j \le i$, meaning that $x$ is a fixpoint of the slice $s$, and
• $q_i < k_i \cdot x_i < p_i$, meaning that $x$ lies between $p$ and $q$ in dimension $i$.
So if we have an (OV2) violation, and if $f$ is contracting, then Lemma 31 implies that there is a fixpoint $x$ of the slice $s$ that lies strictly between $p$ and $q$ in dimension $i$. However, Lemma 30 says that all fixpoints of $s$ lie in the grid, and since $p$ and $q$ are directly adjacent in the grid in dimension $i$, there is no room for $x$, so it cannot exist. The only way that this contradiction can be resolved is if $f$ is not actually contracting.
Hence, (OV2) violations give us a concise witness that f is not contracting. But the points p and q themselves may satisfy the contraction property. While we know that there must be a violation of contraction somewhere, we are not necessarily able to compute such a violation in polynomial time.
To resolve this, we add the analogue of an (OV2) violation to the contraction problem.
(CMV3) An $i$-slice $s$ and two points $x, y \in [0,1]^d$ in $s$ that form the continuous analogue of an (OV2) violation: both are fixpoints of their respective $(i-1)$-slices, and they lie within distance $1/k_i$ of each other in dimension $i$, where $k_i$ is the integer given by Lemma 30 for the LinearFIXP circuit that computes $f$.
Solution type (CM1) asks us to find a fixpoint of the map $f$, and there are two types of violation. Violation type (CMV1) asks us to find two points $x$ and $y$ that prove that $f$ is not contracting with respect to the $\ell_p$ norm. Violation type (CMV2) asks us to find a point that $f$ does not map to $[0,1]^d$. Note that this second type of violation is necessary to make the problem total, because it is possible that $f$ is a contraction map, but the unique fixpoint of $f$ does not lie in $[0,1]^d$.
Violations of type (CMV3) are the direct translation of (OV2) violations to contraction. Note that if $f$ actually is contracting, and has a fixpoint in $[0,1]^n$, then no violations can exist. For (CMV3) violations, this fact is a consequence of Lemmas 30 and 31.
Correctness of the reduction. To prove that the reduction is correct, we must show that all solutions of the OPDC instance given by $P$ and $D$ can be mapped back to solutions of the original instance. Solutions of type (O1) give us a point $p$ such that $D_i(p) = \text{zero}$ for all $i$, which by definition means that the point corresponding to $p$ is a fixpoint of $f$. Violations of type (OV1) give us two points that are both fixpoints of the same slice $s$, which also means that they are both fixpoints of the slice $s$ according to $f$, and it is not difficult to show that these two points violate contraction in an $\ell_p$ norm. Violations of type (OV3) are points that attempt to leave $[0,1]^d$, and so give us a solution of type (CMV2). Violations of type (OV2) map directly to violations of type (CMV3), as we have discussed. So we have the following lemma, which is proved in Appendix F.3.

Lemma 33. There is a polynomial-time promise-preserving reduction from PL-Contraction to OPDC.

The P-Matrix Linear Complementarity Problem
In this section, we reduce P-LCP to Unique-Sink-Orientation and, separately, P-LCP to UniqueEOPL. Given that we show that Unique-Sink-Orientation reduces to UniqueEOPL (via OPDC) and is thus in UniqueEOPL, our direct reduction from P-LCP to UniqueEOPL is not needed to show that P-LCP is contained in UniqueEOPL. However, by reducing directly we can produce a UniqueEOPL instance with size linear in the size of our P-LCP instance, which is needed to obtain the algorithmic result in Section 5.2.
The direct reduction to UniqueEOPL relies heavily on the application of Lemke's algorithm to P-matrix LCPs, and our reduction to Unique-Sink-Orientation relies on the computation of principal pivot transformations of LCPs. Next we introduce the required concepts. Let [d] denote the set {1, . . . , d}.
Definition 35 (LCP $(M, q)$). Given a matrix $M \in \mathbb{R}^{d \times d}$ and a vector $q \in \mathbb{R}^{d \times 1}$, find a $y \in \mathbb{R}^{d \times 1}$ such that $My + q \ge 0$, $y \ge 0$, and $y^{\top}(My + q) = 0$.
In general, deciding whether an LCP has a solution is NP-complete [12], but if $M$ is a P-matrix, as defined next, then the LCP $(M, q)$ has a unique solution for all $q \in \mathbb{R}^{d \times 1}$.
The problem of checking if a matrix is a P-matrix is coNP-complete [15], so we cannot expect to be able to verify that an LCP instance $(M, q)$ is actually defined by a P-matrix $M$. Instead, we use succinct witnesses that $M$ is not a P-matrix as violation solutions, which allows us to define total variants of the P-LCP problem that lie in TFNP, as first done by Megiddo [57,58]. This approach has previously been used to place the P-matrix LCP problem in PPAD and CLS [17,64].
Our paper is about problems with unique solutions. It is well known that a matrix $M$ is a P-matrix if and only if for all $q \in \mathbb{R}^{d \times 1}$, the LCP $(M, q)$ has a unique solution [14]. However, this characterization of a P-matrix is not directly useful for defining succinct violations: while two distinct solutions would be a succinct violation, there is no corresponding succinct witness for the case of no solutions. Next we introduce the three well-known succinct witnesses for $M$ not being a P-matrix that we will use.
First, we introduce some further required notation. Restating (1), the LCP problem $(M, q)$ seeks a pair of non-negative vectors $(y, w)$ such that $w = My + q$ and $y^{\top} w = 0$. If $q \ge 0$, then $(y, w) = (0, q)$ is a trivial solution. We identify a solution $(y, w)$ with the set of components of $y$ that are positive: let $\alpha = \{i \mid y_i > 0, i \in [d]\}$ denote such a set of "basic variables". Going the other way, to check if there is a solution that corresponds to a particular $\alpha \subseteq [d]$, we try to perform a principal pivot transformation, re-writing the LCP by writing certain variables $y_i$ as $w_i$, and checking if in this re-written LCP there exists the trivial solution $(y', w') = (0, q')$. To that end, we construct a $d \times d$ matrix $A_\alpha$, where the $i$th column of $A_\alpha$ is defined as follows. Let $e_i$ denote the $i$th unit column vector in dimension $d$, and let $M_{\cdot i}$ denote the $i$th column of the matrix $M$. The $i$th column of $A_\alpha$ is $-M_{\cdot i}$ if $i \in \alpha$, and $e_i$ otherwise.
Then $\alpha$ corresponds to an LCP solution if $A_\alpha$ is non-singular and the "new $q$", i.e., $A_\alpha^{-1} q$, is non-negative. For a given $\alpha \subseteq [d]$, we define $\mathrm{out}(\alpha) := -$ if $\det(A_\alpha) = 0$; note that this will not happen if $M$ is really a P-matrix, but our general treatment here is made to deal with the non-promise problem, in which case a zero determinant will correspond to a violation. If $\det(A_\alpha) \ne 0$, we define $\mathrm{out}(\alpha)$ as a bit-string in $\{0,1\}^d$ by setting $(\mathrm{out}(\alpha))_i = 1$ if $(A_\alpha^{-1} q)_i < 0$, and $(\mathrm{out}(\alpha))_i = 0$ otherwise. With this notation, a subset $\alpha \subseteq [d]$ corresponds to a solution of the LCP if $\mathrm{out}(\alpha) = 0^d$. We will use $\mathrm{out}(\alpha)$ both to define a succinct violation of the P-matrix property, and in our promise-preserving reduction from P-LCP to Unique-Sink-Orientation. Next, we introduce three types of succinct violations that prove that a matrix $M$ is not a P-matrix. Let $\mathrm{char}(v)$ for $v \subseteq [d]$ denote the characteristic vector of $v$, i.e., $(\mathrm{char}(v))_i = 1$ if $i \in v$ and 0 otherwise. As in the section on USOs, when we write $\oplus$ and $\cap$, here we mean the bit-wise XOR and bit-wise and-operations on bit-strings.
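The computation of $\mathrm{out}(\alpha)$ can be sketched as follows (our own illustrative rendering, using 0-indexed sets for $\alpha$ and exact rational arithmetic; the sign convention, 1 exactly when $(A_\alpha^{-1} q)_i < 0$, is our reading of the construction above):

```python
# Sketch of out(alpha): build A_alpha by replacing column i with -M_i for
# each i in alpha (and e_i otherwise), solve A_alpha x = q exactly, and
# read off the signs; a singular A_alpha yields the symbol "-".

from fractions import Fraction

def solve(A, b):
    """Solve A x = b by exact Gauss-Jordan elimination; None if singular."""
    d = len(b)
    M = [[Fraction(A[r][c]) for c in range(d)] + [Fraction(b[r])]
         for r in range(d)]
    for c in range(d):
        piv = next((r for r in range(c, d) if M[r][c] != 0), None)
        if piv is None:
            return None
        M[c], M[piv] = M[piv], M[c]
        for r in range(d):
            if r != c and M[r][c] != 0:
                factor = M[r][c] / M[c][c]
                M[r] = [a - factor * p for a, p in zip(M[r], M[c])]
    return [M[r][d] / M[r][r] for r in range(d)]

def out(M, q, alpha):
    d = len(q)
    A = [[-M[r][c] if c in alpha else (1 if r == c else 0)
          for c in range(d)] for r in range(d)]
    x = solve(A, q)
    if x is None:
        return "-"                        # singular A_alpha: a violation
    return "".join("1" if xi < 0 else "0" for xi in x)

M, q = [[2, 1], [1, 2]], [-1, -1]         # a small P-matrix example
assert out(M, q, set()) == "11"           # (0, q) is not feasible: q < 0
assert out(M, q, {0, 1}) == "00"          # alpha = {1, 2} solves the LCP
```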
Definition 36 (P-LCP violations). For a given LCP $(M, q)$ in dimension $d$, each of the following provides a polynomial-size witness that $M$ is not a P-matrix:
(PV1) A non-empty subset $\alpha \subseteq [d]$ such that the corresponding principal minor is non-positive, i.e., $\det(M_{\alpha\alpha}) \le 0$.
(PV2) A vector $x \ne 0$ whose sign is reversed by $M$, that is, $x_i \cdot (Mx)_i \le 0$ for all $i \in [d]$.
(PV3) Two distinct subsets $\alpha, \beta \subseteq [d]$ with $\mathrm{out}(\alpha) \ne -$ and $\mathrm{out}(\beta) \ne -$ such that $(\mathrm{out}(\alpha) \oplus \mathrm{out}(\beta)) \cap (\mathrm{char}(\alpha) \oplus \mathrm{char}(\beta)) = 0^d$.
The violation PV1 corresponds to the standard definition of a P-matrix as having all positive principal minors. Megiddo [57,58] used this violation to place the P-LCP problem in TFNP. The same violation was then used by Papadimitriou to put P-LCP in PPAD, because Lemke's algorithm is a PPAD-style complementary pivoting algorithm (indeed, it was one of the inspirations for PPAD), and it will return a non-positive principal minor if it fails to find an LCP solution.
The characterization of P-matrices as those that do not reverse the sign of any non-zero vector, as used for violation PV2, was first discovered by Gale and Nikaido [31]. The final violation, PV3, follows from the work of Stickney and Watson [73], who showed that P-matrix LCPs give rise to USOs, and from the work of Szabó and Welzl [74, Lemma 2.3], who showed that this condition characterizes the "outmap" of a USO, as discussed in Section 4.1; hence the name of the function being "out".
Several other characterizations of P-matrices are known, some of which would provide alternative succinct violations [14,46]. We have given the violations PV1-PV3 above, since we use all three for our promise-preserving reductions from P-LCP to UniqueEOPL and to Unique-Sink-Orientation. In particular, for our reduction from P-LCP to Unique-Sink-Orientation we need violations of types PV1 and PV3, and for our reduction from P-LCP directly to UniqueEOPL we need violations of types PV1 and PV2. It is not immediately apparent how to convert violations of one type to another in polynomial time, and it is conceivable that allowing different sets of violations changes the complexity of the problem. We leave exploring this to future work.
In the promise version, we are promised that M is a P-matrix and seek a solution of type (Q1).
Reduction from P-LCP to Unique-Sink-Orientation. We are now ready to present our reduction to USO. The reduction is simple, and we present it in full detail here. For the LCP instance I = (M, q) in dimension d, we produce an instance U of Unique-Sink-Orientation also in dimension d.
We first need to deal with the possibility that $q$ is degenerate. A P-matrix LCP has a degenerate $q$ if $A_\alpha^{-1} q$ has a zero entry for some $\alpha \subseteq [d]$. To ensure that this does not present a problem for our reduction, we use a standard technique known as lexicographic perturbation [14, Section 4.2]. In the reduction that follows we assume that we are using such a degeneracy resolution scheme.
The reduction associates each vertex $v$ of the resulting USO with a set $\alpha(v)$ of basic variables for the LCP, and then uses $\mathrm{out}(\alpha(v))$ as the outmap at $v$. In detail, for a vertex $v \in \{0,1\}^d$ of $U$, we define $\alpha(v) = \{i \mid v_i = 1\}$, and set $\Psi(v) = \mathrm{out}(\alpha(v))$. It immediately follows that:
• A solution of type (US1) in $U$ is a solution of type (Q1) in $I$.
• A solution of type (USV1) in U is a solution of type (PV1) in I.
• A solution of type (USV2) in U is a solution of type (PV3) in I.
If M is actually a P-matrix then U will have exactly one solution of type (US1), and no violation solutions [73]. Thus our reduction is promise preserving, and we obtain the following.
Theorem 38. There is a polynomial-time promise-preserving reduction from P-LCP with violations of type (PV1) and (PV3) to Unique-Sink-Orientation.
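As an illustration of the vertex encoding, the following sketch (all function names are ours, not from the paper) instantiates Ψ(v) = out(α(v)) for the toy case M = I, where the out-map can be computed directly: for a subset α, the basic solution has y_i = −q_i for i ∈ α and w_i = q_i for i ∉ α, and label i is outgoing exactly when its value is negative. Following outgoing edges reaches the unique sink α* = {i : q_i < 0}.

```python
def alpha(v):
    """Map a USO vertex v in {0,1}^d to its set of basic variables."""
    return frozenset(i for i, bit in enumerate(v) if bit == 1)

def out_identity(a, q):
    """Out-map of the LCP (M, q) with M = I: label i points outward iff
    its value in the basic solution is negative (a toy stand-in for out)."""
    d = len(q)
    return frozenset(i for i in range(d)
                     if (i in a and -q[i] < 0) or (i not in a and q[i] < 0))

def find_sink(q):
    """Follow outgoing edges of the USO; the unique sink is {i : q_i < 0}."""
    v = [0] * len(q)
    while True:
        out = out_identity(alpha(v), q)
        if not out:                      # empty out-map: this is the sink
            return alpha(v)
        v[min(out)] = 1 - v[min(out)]    # flip one outgoing coordinate
```

For example, find_sink([-1, 2, -3]) walks the 3-cube and stops at the unique sink {0, 2}, as predicted above.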
Overview of reductions from P-LCP to EndOfPotentialLine and UniqueEOPL. Comparing UniqueEOPL and EndOfPotentialLine we see that: (U1) and (UV2) correspond to (R1), and (UV1) corresponds to (R2), so the only difference is that UniqueEOPL has the extra violation solution (UV3). Thus there is some extra work to do for our reduction to UniqueEOPL, to map (UV3) solutions back to a P-LCP solution.
Our reduction from P-LCP to the two problems produces the same instance. The reduction in both cases is based on Lemke's algorithm, which we describe in detail in Appendix G.1. For EndOfPotentialLine, we only need to use (PV1) violations. For UniqueEOPL, we only need to use (PV2) violations. Next we give a high-level description of the reduction, where for simplicity we just refer to a resulting EndOfPotentialLine instance. Full details of both reductions appear in Appendix G.
Lemke's algorithm introduces to the LCP an extra variable z and an extra positive vector c, called a covering vector. It follows a path along edges of the new LCP polyhedron based on a complementary pivot rule that maintains an almost-complementary solution. In Figure 5, we give an example with c = (2, 1)^⊤; in our reduction we take c to be the all-ones vector 1. Geometrically, solving an LCP is equivalent to finding a complementary cone, corresponding to a subset of columns of M and the complementary unit vectors, that contains −q. This is depicted on the left in Figure 5, which also shows Lemke's algorithm as inverting a piecewise linear map along the line from −c to −q. A label l is said to be duplicate if y_l = 0 as well as w_l = 0. The vertices without a duplicate label have z = 0 and correspond to solutions of the LCP. To encode these subsets and the duplicate label, we consider bit strings of length n = 2d that represent vertices in the EndOfPotentialLine instance. The first d bits encode the subset, and bits (d + 1), . . . , 2d encode the duplicate label, where bit (d + l) is one if l is the duplicate label. Thus, "valid" vertices in the EndOfPotentialLine instance have at most one bit set to one among bits (d + 1) through 2d. We ensure that "invalid" bit configurations form self-loops in the EndOfPotentialLine instance, and hence do not give rise to any solutions.
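The validity condition on the duplicate-label bits is a one-bit popcount check; a minimal sketch (the integer encoding is our choice):

```python
def is_valid_vertex(code: int, d: int) -> bool:
    """code is a 2d-bit string: the low d bits encode the subset of basic
    variables, and the high d bits one-hot encode the duplicate label (if
    any). Valid configurations have at most one of the high d bits set."""
    high = code >> d                 # the duplicate-label bits
    return high & (high - 1) == 0   # zero or a power of two: at most one bit
```

For d = 3, the configuration 0b001101 (subset {0, 2}, one duplicate-label bit) is valid, while 0b011001 (two duplicate-label bits) is not.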
We use the following key properties of Lemke's algorithm as applied to a P-matrix LCP. Recall that LCP (1) has a trivial solution, namely y = 0, if q ≥ 0. Therefore, wlog assume that min_{i∈[d]} q_i < 0. The starting vertex of the Lemke path is the vertex x_0 = (y_0, w_0, z_0) with y_0 = 0, z_0 = |min_{i∈[d]} q_i|, and w_0 = q + z_0·1. So 0^n is a start of line in the EndOfPotentialLine instance, and we point the successor of 0^n to the bit configuration corresponding to x_0. We then follow the line of bit configurations corresponding to the vertices traversed by Lemke's algorithm, updating z in each step. We use (z_0 − z + 1) as the potential function: (z_0 − z) to ensure increasing potential along the line, and +1 to ensure that only 0^n has zero potential. If we start with a P-LCP instance where M is actually a P-matrix, then this reduction produces a single line from x_0 to the solution of the P-LCP, and z monotonically decreases along this line. The main difficulty of the reduction is dealing with the case where M is not a P-matrix. This may cause Lemke's algorithm to terminate without an LCP solution. Another issue is that, even when Lemke's algorithm does find a solution, z may not decrease monotonically along the line.
In the former case, the first property above gives us a (Q2) solution for the P-LCP problem. In the latter case, we define any point on the line where z increases to be a self-loop, breaking the line at these points. Figure 5 shows an example, where the two vertices at which z increases are turned into self loops, thereby introducing two new solutions before and after the break. Both of these solutions give us a (Q2) solution for the P-LCP instance. The full details of the reduction are involved and appear in Appendix G. It is worth noting that, in the case where the input matrix is actually a P-matrix, the resulting EndOfPotentialLine instance has a unique line, so our reduction is promise-preserving. Moreover, our EndOfPotentialLine instance is a valid UniqueEOPL instance, and we can map back all violations so as to obtain the following.
Theorem 39. There are polynomial-time promise-preserving reductions from P-LCP with violations of type (PV1) to EndOfPotentialLine, and from P-LCP with violations of type (PV2) to UniqueEOPL, and thereby also to EndOfPotentialLine. Both reductions incur only a linear blowup in the size of the instance.
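To illustrate the bookkeeping in the latter part of the construction, here is a small sketch (illustrative only, not the full reduction of Appendix G): given the z-values along the Lemke path, it computes the potential z_0 − z + 1 and marks the steps where z increases, which the reduction turns into self loops.

```python
def potentials_and_breaks(zs):
    """zs: values of the auxiliary variable z along the Lemke path, starting
    at z0. The potential is z0 - z + 1; steps where z increases are marked
    as break points (self loops in the EndOfPotentialLine instance)."""
    z0 = zs[0]
    potentials = [z0 - z + 1 for z in zs]
    breaks = {t for t in range(1, len(zs)) if zs[t] > zs[t - 1]}
    return potentials, breaks
```

On a monotone path the potential rises by one per step and no breaks appear; a step where z increases produces a break, mirroring the self loops in Figure 5.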

Algorithms for Contraction Maps
An algorithm for PL-Contraction. The properties that we observed in our reduction from PL-Contraction to EndOfPotentialLine can also be used to give polynomial-time algorithms for the case where the number of dimensions is constant. In our two-dimensional example, we relied on the fact that each dimension-two slice has a unique point on the blue surface, and that the direction function at this point tells us the direction of the overall fixpoint.
This suggests that a nested binary search approach can be used to find the fixpoint. The outer binary search will work on dimension-two coordinates, and the inner binary search will work on dimension-one coordinates. For each fixed dimension-two coordinate y, we can apply the inner binary search to find the unique point (x, y) that is on the blue surface. Once we have done so, D 2 (x, y) tells us how to update the outer binary search to find a new candidate coordinate y ′ .
This can be generalized to d-dimensional instances, by running d nested instances of binary search. Moreover, our algorithm can detect violations in the course of performing the binary search and is able to produce witnesses to the given function not being a contraction map. Thus, our algorithm solves the non-promise problem PL-Contraction, giving the following theorem, whose proof appears in Appendix H.3.
Theorem 40. Given a LinearFIXP circuit C purporting to encode a contraction map f : [0, 1] d → [0, 1] d with respect to any ℓ p norm, there is an algorithm to find a fixpoint of f or return a pair of points witnessing that f is not a contraction map in time that is polynomial in size(C) and exponential in d.
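To illustrate Theorem 40 in two dimensions, here is a sketch of the nested binary search under simplifying assumptions of ours (tolerance-based stopping instead of exact arithmetic, no violation detection, and an example function that is not from the paper): the inner search finds, for a fixed y, the point where f_1(x, y) = x, and the sign of f_2 − y at that point drives the outer search.

```python
def fixpoint_2d(f, tol=1e-9):
    """Nested binary search for a fixpoint of a contraction f on [0,1]^2.
    Inner search: for fixed y, find x with f(x, y)[0] = x (the displacement
    f1 - x is decreasing in x for a contraction). Outer search: the sign of
    f2 - y at that point says whether the fixpoint lies above or below y."""
    def inner(y):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid, y)[0] - mid > 0:
                lo = mid            # the surface lies to the right
            else:
                hi = mid
        return (lo + hi) / 2
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        y = (lo + hi) / 2
        if f(inner(y), y)[1] - y > 0:
            lo = y                  # the fixpoint lies above
        else:
            hi = y
    y = (lo + hi) / 2
    return inner(y), y
```

For example, for the contraction f(x, y) = (0.2y + 0.3, 0.2x + 0.4) the search converges to the unique fixpoint of f.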
An algorithm for Contraction. We are also able to generalize this to the more general Contraction problem, where the input is given as an arbitrary (non-linear) arithmetic circuit. Here the key issue is that the fixpoint may not be rational, and so we must find a suitably accurate approximate fixpoint. Our nested binary search approach can be adapted to do this.
Since we now deal with approximate fixpoints, we must cut off each of our nested binary search instances at an appropriate accuracy. Specifically, we must ensure that the solution is accurate enough so that we can correctly update the outer binary search. Choosing these cutoff points turns out to be quite involved, as we must choose different cutoff points depending on both the norm and the level of recursion, and moreover the ℓ 1 case requires a separate proof. Again, the algorithm is able to detect violations of contraction during the course of the binary search, and thus solves the more general problem of either finding a fixpoint when the circuit defines a contraction map, or returning a pair of points that are not contracting. The details of this are deferred to Appendix H.4, where the following theorem is shown.
Actually, our algorithm treats the function as a black-box, and so it can be applied to any contraction map, with Theorem 41 giving the number of queries that need to be made.

Aldous' algorithm for PLCP
Aldous [2] analysed a simple randomized algorithm for solving local search problems. The algorithm randomly samples a large number of candidate solutions and then performs a local search from the best sampled solution. Aldous' algorithm can solve any problem in PLS, and thus any problem in UniqueEOPL. In [35] it was noted that our reduction from P-LCP to UniqueEOPL only incurs a linear blowup: from an LCP in dimension n we produce a UniqueEOPL instance with O(2^n) vertices. When we apply Aldous' algorithm to the resulting instance, the expected running time is 2^{n/2} · poly(n) in the worst case, which gives the fastest known randomized algorithm for P-LCP. Thus, we get the following corollary of Theorem 39.

Corollary 42.
There is a randomized algorithm for P-LCP that runs in expected time O(1.4143^n).
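The sample-then-walk structure of Aldous' algorithm can be sketched as follows (an abstract toy interface of our own; the real algorithm samples on the order of 2^{n/2} vertices of the instance):

```python
import random

def aldous(S, V, universe, samples):
    """Sample `samples` candidate vertices, start from the sampled vertex of
    highest potential that is not a self loop, then local search via S."""
    start = 0  # the standard start of the line
    best = max((v for v in random.sample(universe, samples) if S(v) != v),
               key=V, default=start)
    while S(best) != best:          # follow successors to the end of the line
        best = S(best)
    return best
```

On a toy line 0 → 1 → ... → 999 with potential V(v) = v, the walk always terminates at the end of the line; sampling merely shortens the expected walk.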

A Proofs for Section 2: Equivalence of EOPL and EOML
First we recall the definition of EndOfMeteredLine, which was first defined in [45]. It is close in spirit to the problem EndOfLine that is used to define PPAD [64].

A.1 EndOfMeteredLine to EndOfPotentialLine
Given an instance I of EndOfMeteredLine defined by circuits S, P and V on vertex set {0, 1}^n, we create an instance I′ of EndOfPotentialLine with circuits S′, P′, and V′ on vertex set {0, 1}^(n+1), i.e., we introduce one extra bit. This extra bit essentially takes care of the difference in the value of the potential at the starting point of EndOfMeteredLine and EndOfPotentialLine, namely 1 and 0 respectively. Let k = n + 1; we then create a potential function accordingly. The idea is to make 0^k the starting point with potential zero as required, and to make all other vertices with first bit 0 be dummy vertices with self loops. The real graph will be embedded in vertices with first bit 1, i.e., of type (1, u). Here by (b, u) ∈ {0, 1}^k, where b ∈ {0, 1} and u ∈ {0, 1}^n, we mean a bit string of length k with the first bit set to b and, for each i ∈ [2 : k], bit i set to bit i − 1 of u.
Valid solutions of EndOfMeteredLine of types T2 and T3 require the potential to be strictly greater than zero, while solutions of EndOfPotentialLine may have zero potential. However, a solution of EndOfPotentialLine cannot be a self loop, so we have added self-loops around vertices with zero potential in the EndOfPotentialLine instance. By construction, the next lemma follows.
Lemma 44. S′, P′, V′ are well defined and polynomial in the sizes of S, P, V respectively.
Our main theorem in this section is a consequence of the following three lemmas.
Proof. This follows from the construction of V′, the second condition in S′ and P′, and the third and fourth conditions in S′ and P′ respectively.

Case I. If S′(P′(x)) ≠ x and x ≠ 0^k, then we will show that either u is a genuine start of a line other than 0^n, giving a T1 type solution of the EndOfMeteredLine instance I, or there is some issue with the potential at u, giving either a T2 or T3 type solution of I. Since S′(P′(1, 0^n)) = (1, 0^n), we have u ≠ 0^n. Thus if S(P(u)) ≠ u then we get a T1 type solution of I and the proof follows. If V(u) = 1 then we get a T2 solution of I and the proof follows.
Case II. Similarly, if P′(S′(x)) ≠ x, then either u is a genuine end of a line of I, or there is some issue with the potential at u. If P(S(u)) ≠ u then we get a T1 solution of I. Otherwise, P(S(u)) = u and V(u) > 0. Now as (b, u) is not a self loop and V(u) > 0, it must be the case that S′(b, u) = (1, S(u)). However, P′(1, S(u)) ≠ (b, u) even though P(S(u)) = u. This happens only when S(u) is a self loop because V(S(u)) = 0. Therefore, we get V(S(u)) − V(u) < 0, i.e., u is a type T3 solution of I.

Proof. Clearly, x ≠ 0^k. Let y = (b′, u′) = S′(x) ≠ x, and observe that P′(y) = x. This also implies that y is not a self loop, and hence b = b′ = 1 and V(u) > 0 (Lemma 45). Further, y = S′(1, u) = (1, S(u)), hence u′ = S(u). Given that V(u) > 0, u gives a type T3 solution of EndOfMeteredLine.
Theorem 48. An instance of EndOfMeteredLine can be reduced in linear time to an instance of EndOfPotentialLine, such that a solution of the former can be constructed in linear time from a solution of the latter.

A.2 EndOfPotentialLine to EndOfMeteredLine
In this section we give a linear time reduction from an instance I of EndOfPotentialLine to an instance I ′ of EndOfMeteredLine. Let the given EndOfPotentialLine instance I be defined on vertex set {0, 1} n and with procedures S, P and V , where V : {0, 1} n → {0, . . . , 2 m − 1}.
Valid Edge. We call an edge u → v valid if v = S(u) and u = P (v).
We construct an EndOfMeteredLine instance I′ on vertex set {0, 1}^k where k = n + m. Let S′, P′ and V′ denote the procedures of instance I′. The idea is to capture the value V(x) of the potential in the m least significant bits of the vertex description itself, so that it can be gradually increased or decreased along valid edges. For vertices with irrelevant values of these m least significant bits we will create self loops. Invalid edges will also become self loops, e.g., if y = S(x) but P(y) ≠ x then we set S′(x, ·) = (x, ·). We will see that these cannot introduce new solutions.
In order to ensure V′(0^k) = 1, the V(S(0^n)) = 1 case needs to be discarded. For this, we first do some initial checks to see whether the given instance I is trivial. If the input EndOfPotentialLine instance is trivial, in the sense that either 0^n or S(0^n) is a solution, then we can just return it.

Proof. Since both 0^n and S(0^n) are not solutions, we have V(0^n) < V(S(0^n)) < V(S(S(0^n))), P(S(0^n)) = 0^n, and, for u = S(0^n), S(P(u)) = u and P(S(u)) = u. In other words, 0^n → S(0^n) → S(S(0^n)) are valid edges, and since V(0^n) = 0, we have V(S(S(0^n))) ≥ 2.
Let us assume from now on that 0^n and S(0^n) are not solutions of I; then by Lemma 49, 0^n → S(0^n) → S(S(0^n)) are valid edges, and V(S(S(0^n))) ≥ 2. We can avoid the need to check whether V(S(0^n)) is one altogether, by making 0^n point directly to S(S(0^n)) and making S(0^n) a dummy vertex.
We first construct S′ and P′, and then construct V′, which will give value zero to all self loops and use the m least significant bits to give a value to all other vertices. Before describing S′ and P′ formally, we first describe the underlying principles. Recall that the vertex set of I is {0, 1}^n with possible potential values {0, . . . , 2^m − 1}, while the vertex set of I′ is {0, 1}^k where k = m + n. We will denote a vertex of I′ by a tuple (u, π), where u ∈ {0, 1}^n and π ∈ {0, . . . , 2^m − 1}. When we say that we introduce an edge x → y, we mean that we introduce a valid edge from x to y, i.e., y = S′(x) and x = P′(y).
• If u → u′ is a valid edge in I, then let p = V(u) and p′ = V(u′).
– If p = p′ then we introduce the edge (u, p) → (u′, p′).
• If u ≠ 0^n is the start of a path, i.e., S(P(u)) ≠ u, then make (u, V(u)) the start of a path by ensuring P′(u, V(u)) = (u, V(u)).
• If u is the end of a path, i.e., P(S(u)) ≠ u, then make (u, V(u)) the end of a path by ensuring S′(u, V(u)) = (u, V(u)).
The last two bullets above remove singleton solutions from the system by making them self loops. However, this cannot kill all solutions, since there is a path starting at 0^n, which has to end somewhere. Further, note that this entire process ensures that no new starts or ends of paths are introduced.
As mentioned before, the intuition for the potential function procedure V ′ is to return zero for self loops, return 1 for 0 k , and return the number specified by the lowest m bits for the rest.
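This description of V′ admits a direct sketch (we encode a vertex (u, π) as an integer with π in the m least significant bits; the encoding choice is ours):

```python
def make_V_prime(S_prime, m):
    """V' for the constructed EndOfMeteredLine instance: 1 on the start
    vertex 0^k, 0 on self loops (dummy vertices), and otherwise the number
    stored in the m least significant bits of the encoding of (u, pi)."""
    def V_prime(x):
        if x == 0:
            return 1                    # the start 0^k has potential 1
        if S_prime(x) == x:
            return 0                    # self loops are dummies
        return x & ((1 << m) - 1)       # potential held in the low m bits
    return V_prime
```

With a stub successor that self-loops on one dummy vertex, the three branches are easy to exercise.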

The fact that the procedures S′, P′ and V′ give a valid EndOfMeteredLine instance follows from the construction. The next three lemmas show how to construct a solution of the EndOfPotentialLine instance I from a type T1, T2, or T3 solution of the constructed EndOfMeteredLine instance I′. The basic idea for the next lemma, which handles type T1 solutions, is that we never create a spurious end or start of a path.

Lemma 51. Let x = (u, π) be a type T1 solution of the constructed EndOfMeteredLine instance I′. Then u is a type (R1) solution of the given EndOfPotentialLine instance I.
For the remaining cases, let P′(S′(x)) = x, and let u′ = S(u). There is a valid edge from u to u′ in I. Then we will have created valid edges from (u, V(u)) to (S(u), V(S(u))), with the second coordinate changing appropriately. The rest of the vertices (u, ·) are self loops, a contradiction. A similar argument applies for the case when S′(P′(x)) = x.
The basic idea behind the next lemma is that a T2 type solution in I ′ has potential 1. Therefore, it is surely not a self loop. Then it is either an end of a path or near an end of a path, or else near a potential violation.
Lemma 52. Let x = (u, π) be a type T2 solution of I′. Then either u ≠ 0^n is the start of a path in I (a type (R1) solution), or P(u) is an (R1) or (R2) type solution in I, or P(P(u)) is an (R2) type solution in I.
Proof. Clearly u ≠ 0^n, and x is not a self loop, i.e., it is not a dummy vertex with an irrelevant value of π. Further, π = 1. If u is a start or end of a path in I then we are done.
At a type T3 solution of I′ the potential is strictly positive, hence these solutions are not self loops. If they correspond to a potential violation in I then we get a type (R2) solution. But this may not be the case, if we made S′ or P′ self-pointing due to an end or start of a path respectively. In that case, we get a type (R1) solution. The next lemma formalizes this intuition.
Lemma 53. Let x = (u, π) be a type T3 solution of I ′ . If x is a start or end of a path in I ′ then u gives a type (R1) solution in I. Otherwise u gives a type (R2) solution of I.
Proof. Since V′(x) > 0, x is not a self loop, hence not a dummy vertex, and u ≠ 0^n. If u is the start or end of a path then u is a type (R1) solution of I. Otherwise, there are valid incoming and outgoing edges at u, and therefore also at x.
If V′(S′(x)) − V′(x) ≠ 1, then, since the potential either remains the same or changes by exactly one along edges of I′, it must be the case that V′(S′(x)) − V′(x) ≤ 0. This is possible only when V(S(u)) ≤ V(u). Since u is not an end of a path, we have S(u) ≠ u and P(S(u)) = u. Thus, u is a type (R2) solution of I.
Our main theorem follows using Lemmas 50, 51, 52, and 53.

B.1 Proof of Lemma 16
Throughout this proof, we will fix D = (D_i)_{i=1,...,d} to be the direction functions, and P = P(k_1, k_2, . . . , k_d) to be the set of points used in the OPDC instance. We will produce a UniqueForwardEOPL instance L = (S, V).
The circuit S. A vertex of the line is a tuple (p_0, p_1, p_2, . . . , p_d), where each p_i ∈ P ∪ {−} is either a point or a special symbol, −, that is used to indicate an unused element. We use vert = (P ∪ {−})^{d+1} to denote the set of possible vertices. Only some of the tuples are valid encodings of a vertex. To be valid, a vertex (p_0, p_1, p_2, . . . , p_d) must obey the following rules: 1. If p_i ≠ −, then D_j(p_i) = zero for all j ≤ i. This means that if p_i is a point, then it must be a point on the i-surface.
2. If p_i ≠ − and i < d, then we must have D_{i+1}(p_i) ≠ down.
3. If p_i ≠ − and p_j = − and i < j, then we must have (p_i)_{j+1} = 0.
We define the function IsVertex : vert → {true, false} that determines whether a given v ∈ vert is a valid encoding of a vertex, by following the rules laid out above. This can clearly be computed in polynomial time. The initial vertex, which will be mapped to the bit-string 0 n , will be (p init , −, . . . , −), where p init = (0, 0, . . . , 0) is the all zeros point in P .
Given a vertex encoding v = (p_0, p_1, p_2, . . . , p_d) ∈ vert, the circuit S carries out the following operations. If IsVertex(v) is false, then S(v) = v, indicating that v is indeed not a vertex. Otherwise, we use the following set of rules to determine the successor of v. Let i be the smallest index such that p_i ≠ −.
1. If i = d then our vertex has the form v = (−, . . . , −, p d ), and p d is on the d-surface, meaning that it is a solution to the discrete contraction map. So we set S(v) = v to ensure that this is a solution.

2. If D_{i+1}(p_i) = zero and i < d, then S(v) overwrites the point in position i + 1 with p_i, and sets position i to −. All other components of v are unchanged.
3. If D_{i+1}(p_i) ≠ zero and i > 0, then let q be the point such that (q)_{i+1} = (p_i)_{i+1} + 1, (q)_j = (p_i)_j for all j > i + 1, and (q)_j = 0 for all j ≤ i. (a) If q is a point in P, then we define S(v) = (q, p_1, p_2, . . . , p_d).
(b) Otherwise, we must have that (p_i)_{i+1} = k_{i+1}, meaning that p_i is the last point of the grid in dimension i + 1. This means that we have a solution of type (OV3), since (p_i)_{i+1} = k_{i+1} and D_{i+1}(p_i) = up from the fact that IsVertex(v) = true. So we set S(v) = v. 4. If D_{i+1}(p_i) ≠ zero and i = 0, then let q be the point such that (q)_j = (p_0)_j for all j > 1, and (q)_1 = (p_0)_1 + 1.
(a) If q is in the point set P , then we define S(v) = (q, p 1 , p 2 , . . . , p d ).
(b) If q is not in P , then we again have a solution of type (OV3), since (p 0 ) 1 = k 1 , and D 1 (p 0 ) = up from the fact that IsVertex(v) = true. So we set S(v) = v.
The potential function. To define the potential function, we first define an intuitive potential that uses a tuple of values ordered lexicographically, and then translate this into a circuit V that produces integers. We call this tuple of values the lexicographic potential associated with a vertex v, and denote it by LexPot(v). To define the lexicographic potential, we need an auxiliary function Potential : (P ∪ {−}) × {0, . . . , d + 1} → Z. The lexicographic potential of v = (p_0, p_1, . . . , p_d) ∈ vert is then LexPot(v) = (Potential(p_0, 0), Potential(p_1, 1), . . . , Potential(p_{d−1}, d − 1)).
Note that LexPot(v) ∈ Z^d, since the definition ignores p_d. Let ≺_d be the ordering on tuples from Z^d where they are compared lexicographically from right to left, so that, for example, (0, 0) ≺_2 (1, 0) ≺_2 (0, 1) ≺_2 (1, 1). Our potential function will be defined by the tuples given by LexPot and the order ≺_d. We omit the subscript from ≺ whenever it is clear from the context. Given a vertex v = (p_0, p_1, . . . , p_d) ∈ vert, let LexPot(v) = (l_0, . . . , l_{d−1}). To translate from lexicographically ordered tuples to integers in a way that preserves the ordering, we pick some integer k such that k > k_i for all i, meaning that k is larger than the grid-width used in every dimension, which implies that l_j < k for all j. We now take a weighted sum of the coordinates of LexPot(v), where the weight for coordinate i is k^i, so that the ith coordinate dominates coordinates 0 through i − 1. The final potential value V : vert → Z is then given by V(v) = Σ_{i=0}^{d−1} l_i · k^i.

Proof. We begin by considering solutions of type (UF1). Let x = (p_0, p_1, . . . , p_d) and let y = S(x), and suppose that x is a (UF1) solution. This means that S(x) ≠ x and either S(y) = y or V(y) ≤ V(x). We first suppose that S(y) = y, and note that in this case we must have IsVertex(x) = true while IsVertex(y) = false. We have the following cases based on the rule used to determine S(x).

1. If S(x) is determined by the first rule in the definition of S, then this means that p_d ≠ −. Since IsVertex(x) = true, this means that D_i(p_d) = zero for all i, which means that p_d is a solution of type (O1).
2. If S(x) is determined by the second rule, then we have two cases.
(a) If D_{i+2}(p_i) = down, then:
i. If p_{i+2} ≠ − then p_i and p_{i+2} are a solution of type (OV2). Specifically, this holds because, in particular, D_{i+1}(p_i) = zero, which is given by the fact that we are in the second case of the definition of S.
ii. If p_{i+2} = − then this means that (p_i)_{i+2} = 0. Since D_{i+2}(p_i) = down this gives us a solution of type (OV3).
(b) If D_{i+2}(p_i) ≠ down, then we argue that this case is impossible. Specifically, we will show that IsVertex(y) = true, meaning that S(y) ≠ y. To do this, we will prove that the four conditions of IsVertex hold for y. Note that y differs from x only in positions i and i + 1, and that position i of y is −. So we only need to consider the conditions imposed by IsVertex when the point p_i is placed in position i + 1.
i. The first condition of IsVertex is that p_i should be on the (i + 1)-surface, which is true because the second rule of S explicitly checks that D_{i+1}(p_i) = zero, while the fact that IsVertex(x) = true guarantees that D_j(p_i) = zero for all j < i + 1. ii. The second condition requires that D_{i+2}(p_i) ≠ down, which is true by assumption. iii. Every constraint imposed by the third and fourth conditions also holds for p_i in x, and so the fact that IsVertex(x) = true implies that these conditions hold for y.

3. If S(x) is determined by the third rule defining S, then we have three cases. Since the third rule was used, we know that y = (q, p_1, p_2, . . . , p_d), with the definition of q being given in the third rule.
(a) If q is not in P , then we have a solution of type (OV3), as described in the algorithm for S.
(b) If q ∈ P and D 1 (q) = down, then we have a solution of type (OV3), since q 1 = 0 by definition.
(c) If q ∈ P and D_1(q) ≠ down, then we argue that the case is impossible, and we prove this by showing that IsVertex(y) = true. Note that y differs from x only in the position occupied by q, and so this is the only point for which we need to prove the conditions, since all the other points satisfy the conditions by the fact that IsVertex(x) = true.
i. The first requirement of IsVertex(y) holds trivially, since the only new requirement is that q is on the 0-surface, and every point is on the 0-surface by definition. ii. The second requirement is that D_1(q) ≠ down, which is true by assumption. iii. The third and fourth conditions place constraints on certain coordinates of q. For coordinates j < i, the third condition requires that q_j = 0, which is true by definition, while the fourth condition is inapplicable. For coordinates j ≥ i, the constraints imposed by the third and fourth conditions hold because q_j = (p_i)_j in these coordinates, and p_i also satisfies these constraints.

4. If S(x) is determined by the fourth rule, then we have three cases. Let y = (q, p_1, p_2, . . . , p_d) be the value of y produced by the fourth rule, where the definition of q is given in that rule.
(a) If q is not in P , then we have a solution of type (OV3), as described in the algorithm for S.
(b) If q ∈ P and D 1 (q) = down then we have a solution of type (OV2). Specifically, the points p 0 and q provide the violation since (q) 1 = (p 0 ) 1 + 1, while D 1 (p 0 ) = up and D 1 (q) = down. The fact that both q and p 0 belong to the same 1-slice is guaranteed by the definition of q.
(c) If q ∈ P and D_1(q) ≠ down then we again argue that IsVertex(y) = true, making this case impossible. The reasoning is the same as the reasoning used in case 3c.
We now proceed to the case where we have a solution x = (p_0, p_1, . . . , p_d) of type (UF1) and the vertex y = S(x) satisfies S(y) ≠ y. In this case, we must have V(y) ≤ V(x). We argue that this is impossible, and again we will do a case analysis based on the rule used to determine the output of the circuit S.
1. If S(x) is determined by the first rule then we have IsVertex(y) = false, which is not possible in this case.
2. If S(x) is determined by the second rule, then we can prove that V(y) > V(x). This is because y differs from x only in positions i and i + 1, and because (p_i)_{i+1} = (p_{i+1})_{i+1} + 1 by the fourth rule of IsVertex(x). Hence V(y) > V(x), where we are using the fact that k^{i+1} > k^i · Potential(p_i, i).

3. If S(x) is determined by the third rule, then note that we must have q ∈ P. We again argue that V(y) > V(x). Specifically, observe that V(y) = V(x) + Potential(q, 0) = V(x) + 1.

4. If S(x) is determined by the fourth rule, then note that we must have q ∈ P, and we also have V(y) = V(x) + 1.

Finally, we move to the case where we have a solution of type (UFV1). In this case we have two vertices x = (p_0, p_1, . . . , p_d) and y = (q_0, q_1, . . . , q_d) for which IsVertex(x) = IsVertex(y) = true, and for which V(x) ≤ V(y) < V(S(x)). We will once again perform a case analysis over the possible cases of S.
1. S(x) cannot be determined by the first case in the definition of S, because that case only applies when IsVertex(y) = false.
2. If S(x) is determined by the second rule, then let LexPot(x) = (v_0, . . . , v_{d−1}) and LexPot(y) = (v′_0, . . . , v′_{d−1}). Since V(x) ≤ V(y) < V(S(x)), the tuple LexPot(y) agrees with LexPot(x) and LexPot(S(x)) on elements i + 1 through d − 1. Furthermore, we must have v′_i ≥ v_i. Let j be the largest index satisfying j ≤ i and v′_j ≠ v_j. Note that such a j must exist, since otherwise we would have x = y, which would contradict the fact that x ≠ y in any solution of type (UFV1). We claim that p_j and q_j form a solution of type (OV1).
Let s be the j-slice that satisfies s_l = * for all l ≤ j and s_l = (p_j)_l for all l > j. The point p_j lies in the slice s by definition. We claim that q_j also lies in this slice, which follows from the fact that v_l = v′_l for all l > j, combined with the third and fourth properties of IsVertex(x) and IsVertex(y). Note also that (p_j)_j ≠ (q_j)_j, and hence p_j ≠ q_j.
Finally, the first property of IsVertex(x) and IsVertex(y) implies that D_l(p_j) = zero and D_l(q_j) = zero for all l ≤ j. So p_j and q_j are two distinct fixpoints of the j-slice s, meaning that we have a solution of type (OV1).
3. We claim that S(x) cannot be determined by the third rule in the definition of S. In this case we have LexPot(x) = (0, v_1, v_2, . . . , v_{d−1}), since the case is only applicable when p_0 = −. Observe that LexPot(S(x)) = (1, v_1, v_2, . . . , v_{d−1}), and that there is no possible tuple t that satisfies LexPot(x) ≺ t ≺ LexPot(S(x)). This means that we must have V(y) = V(x), but this is only possible if y = x, due to the constraints placed by the third and fourth properties of IsVertex. Hence this case is not possible, since y = x is specifically ruled out in a solution of type (UFV1).
4. For similar reasons, we claim that S(x) cannot be determined by the fourth rule. In this case LexPot(S(x)) again exceeds LexPot(x) by exactly one in the first element and agrees with it elsewhere, which again means that there cannot be a tuple t satisfying LexPot(x) ≺ t ≺ LexPot(S(x)). Hence we would have V(x) = V(y) and x = y as in the previous case, which is impossible.
To complete the proof, we note that solutions of type (O1) are only ever mapped onto solutions of type (UF1).
This proves that our reduction from OPDC to UniqueForwardEOPL is correct. The fact that the reduction is promise-preserving follows from the fact that all solutions of type (O1) are mapped onto solutions of type (UF1). Hence, if we promise that there are no violations in the OPDC instance, then the resulting UFEOPL instance can only have solutions of type (UF1). This completes the proof of Lemma 16.

B.2 Proof of Lemma 18
We provide a polynomial-time promise-preserving reduction from UniqueForwardEOPL to UniqueForwardEOPL+1. This reduction uses essentially the same idea as the reduction from EndOfPotentialLine to EndOfMeteredLine given in Theorem 10.
Let L = (S, V ) be an instance of UniqueForwardEOPL, and let n be the bit-length of the strings used to represent vertices in L. If x is a vertex in L such that V (S(x)) > V (x) + 1, then we introduce new vertices between x and S(x), each of which increases in potential by 1.
Formally, our UniqueForwardEOPL+1 instance will be denoted as L ′ = (S ′ , V ′ ). Each vertex in this instance will be a pair (v, i), where v is a vertex from L, and i is an integer between 0 and 2 n . The circuit S ′ is defined in the following way. For each vertex (v, i) we use the following algorithm.
The last three conditions add a new sequence of vertices between any edge (x, y) where V(y) > V(x) + 1. Specifically, this sequence is (x, 0), (x, 1), . . . , (x, V(y) − V(x) − 1), (y, 0). Any pair (v, i) that is not used in such a sequence has S′(v, i) = (v, i), meaning that it is not a vertex.
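One way to realize this stepping is sketched below, assuming the natural potential V′(v, i) = V(v) + i (an assumption of ours, consistent with the sequence described above):

```python
def make_S_prime(S, V):
    """Pad each edge v -> S(v) whose potential gap exceeds 1 with
    intermediate vertices (v, 1), ..., (v, gap - 1), so that every step
    raises the potential V'(v, i) = V(v) + i by exactly one."""
    def S_prime(x):
        v, i = x
        y = S(v)
        if y == v:
            return x                    # the end of the line stays fixed
        gap = V(y) - V(v)
        if i < gap - 1:
            return (v, i + 1)           # next padding vertex
        return (y, 0)                   # finally step to the real successor
    return S_prime
```

On a toy line a → b → c with potentials 0, 5, 6, the padded line visits (a, 0), . . . , (a, 4), (b, 0), (c, 0), increasing the potential by one per step.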
The potential function is defined as follows. For each vertex (v, i), we use the following algorithm.
This completes the specification of the reduction. Clearly the reduction can be carried out in polynomial time. We now prove that this is a promise-preserving reduction.

Proof. We start by considering solutions of type (UFP1). Let x = (v, i) be a vertex, and let y = (u, j) be the vertex with y = S′(x). In a solution of type (UFP1) we have that S′(x) ≠ x and either S′(y) = y or V′(y) ≠ V′(x) + 1. Hence, there are two cases to consider.
1. If S′(y) = y, then we must have that u = S(v) and j = 0: if (v, i) is a vertex, then S′(v, i) is always either (v, i + 1) or (S(v), 0), and in the former case the output is itself a vertex whose successor differs from it. So, we have y = (S(v), 0), and S′(y) = y. This can only occur in the case where either S(S(v)) = S(v), or V(S(S(v))) < V(S(v)). Both of these cases yield a solution of type (UF1) for L.

2. We claim that the remaining case, in which V′(y) ≠ V′(x) + 1, is not possible. Since we have already dealt with the case where S′(y) = y, we can assume that y is a vertex. Note that S′(x) cannot be determined by cases 1 or 4 of the algorithm, since in those cases x would not be a vertex. If S′(x) is determined by case 2 of the algorithm, then we have that V′(y) = V′(x) + 1 by definition. If S′(x) is determined by case 3 of the algorithm, then y = (S(v), 0) and V′(y) = V(S(v)) = V′(x) + 1, since x = (v, V(S(v)) − V(v) − 1). Hence, this case is impossible by construction.
Thus, we have dealt with all possible solutions of type (UFP1). We now consider a violation of type (UFPV1). In this case we have two vertices x = (v, i) and y = (u, j) such that x ≠ y, x ≠ S′(x), y ≠ S′(y), and V′(x) = V′(y). We claim that this gives us a solution of type (UFV1). If i = j then V(v) = V(u), and so we have a solution of type (UFV1) in L. Otherwise, assume, without loss of generality, that i < j. Since V′(x) = V′(y) we have V(v) + i = V(u) + j, and hence V(u) < V(v). Moreover, since (u, j) is a valid vertex in L′ with j ≥ 1, we have j ≤ V(S(u)) − V(u) − 1, and therefore V(v) = V(u) + (j − i) < V(S(u)). Hence, we have shown that V(u) ≤ V(v) < V(S(u)), which is exactly a solution of type (UFV1) in L.
The lemma implies that our reduction is correct. The fact that it is promise-preserving follows from the fact that solutions of type (UF1) are only ever mapped onto solutions of type (UFP1). Hence, if it is promised that L only has (UF1) solutions, then L ′ must only have (UFP1) solutions. This completes the proof of Lemma 18.

B.3 Proof of Lemma 20
The reduction of Hubáček and Yogev [45] relies on the pebbling game technique that was first applied by Bitansky et al. [4]. Let L = (S, W, x_s, T) be a SinkOfVerifiableLine instance. The pebbling game is played by placing pebbles on the vertices of this instance according to the following rules.
• A pebble may be placed or removed from the starting vertex x s at any time.
• A pebble may be placed on or removed from a vertex x ≠ x_s if and only if there is a pebble on a vertex y with S(y) = x.
Given n pebbles, how far can we move along the line by following these rules? The answer is that we can place a pebble on vertex 2^n − 1 by applying the following optimal strategy, which is recursive. In the base case, we can place a pebble on vertex 2^1 − 1 = 1 by placing our single pebble on the successor of x_s. In the recursive step, where we have n pebbles, we use the following approach:
1. Follow the optimal strategy for n − 1 pebbles, in order to place a pebble on vertex 2^{n−1} − 1.
2. Place pebble n on the vertex 2^{n−1}.
3. Follow the optimal strategy for n − 1 pebbles backwards in order to reclaim the first n − 1 pebbles.
Step 3 above relies on the fact that the pebbling game is reversible, meaning that we can execute any sequence of moves backwards as well as forwards. Finally, in a fourth step, we have a single pebble on vertex 2^{n−1}, and we follow the optimal strategy for n − 1 pebbles again, but using 2^{n−1} as the starting point; this places a pebble on vertex 2^{n−1} + 2^{n−1} − 1 = 2^n − 1.
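The optimal strategy and the legality of its moves can be checked mechanically. The following sketch simulates the rules above; the convention that the starting vertex x_s (vertex 0) counts as permanently pebbled is an assumption made for the sketch:

```python
def strategy(k, s=0):
    """Moves of the optimal pebbling strategy with k pebbles on the sub-line
    starting at vertex s (vertex s itself is assumed to be pebbled: it is
    either the global start x_s, or holds a pebble placed earlier).
    Ends with a pebble on vertex s + 2**k - 1."""
    if k == 1:
        return [('place', s + 1)]           # base case: pebble the successor of s
    half = 2 ** (k - 1)
    first = strategy(k - 1, s)              # 1. reach s + half - 1 with k-1 pebbles
    moves = list(first)
    moves.append(('place', s + half))       # 2. place pebble k on s + half
    moves += [('remove' if a == 'place' else 'place', v)
              for a, v in reversed(first)]  # 3. run step 1 backwards to reclaim
    moves += strategy(k - 1, s + half)      # 4. recurse with s + half as the start
    return moves

def run(moves, start=0, max_pebbles=None):
    """Simulate the pebbling rules; the start vertex counts as always pebbled.
    Returns the furthest vertex that ends up holding a pebble."""
    pebbled, peak = {start}, 0
    for action, v in moves:
        assert v - 1 in pebbled, "no pebble on the predecessor of %d" % v
        if action == 'place':
            pebbled.add(v)
        else:
            pebbled.remove(v)
        peak = max(peak, len(pebbled) - 1)  # movable pebbles in use
    if max_pebbles is not None:
        assert peak <= max_pebbles
    return max(pebbled)

# With n pebbles the strategy legally reaches vertex 2**n - 1.
for n in range(1, 8):
    assert run(strategy(n), max_pebbles=n) == 2 ** n - 1
```

Reversing a legal sequence of moves is itself legal, since the predecessor of a vertex touched by a move is still pebbled immediately after that move; this is what makes step 3 sound.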
The reduction from UniqueForwardEOPL to UniqueEOPL. To reduce UniqueForwardEOPL to UniqueEOPL, we play the optimal strategy for the pebbling game. Note that, since every step of the pebbling game is reversible, this gives us a predecessor circuit. We will closely follow the reduction given by Bitansky et al. [4] from SinkOfVerifiableLine to EndOfLine. Specifically, we will reduce an instance L = (S, V) of UniqueForwardEOPL to an instance L′ = (S′, P′, V′) of UniqueEOPL.
A vertex in L′ will be a tuple of pairs ((v_1, a_1), (v_2, a_2), . . . , (v_n, a_n)) describing the state of the pebbling game. Each v_i is a bit-string, while each a_i is either
• the special symbol −, implying that pebble i is not used and that the bit-string v_i should be disregarded, or
• an integer such that a_i = V(v_i), meaning that pebble i is placed on the vertex v_i.
Bitansky et al [4] have produced circuits S ′ and P ′ that implement the optimal strategy of the pebbling game for pebbles encoded in this way. The only slight difference is that they reduce from SinkOfVerifiableLine, but we can apply their construction by creating the circuit W so that W (v, a) = 1 if and only if V (v) = a. We refer the reader to their work for the full definition of these two circuits.
Hubáček and Yogev [45] built upon this reduction by showing that it is possible to give a potential function V′ for the resulting instance. Specifically, their potential encodes how much progress we have made along the optimal strategy, which, it turns out, can be computed purely from the current configuration of the pebbles. Their construction also guarantees that the potential always increases by exactly 1 along each edge, meaning that we have V′(S′(x)) = V′(x) + 1 whenever S′(x) and x are both vertices. We refer the reader to their work for the full definition of the circuit V′.
Violations. So far, we have a reduction from the promise version of UniqueForwardEOPL to UniqueEOPL, which entirely utilizes prior work. Specifically, every solution of type (U1) in L ′ will map back to a solution of type (UFP1) in L.
Our contribution is to handle the violations, thereby giving a promise-preserving reduction from the non-promise version of UniqueForwardEOPL to the non-promise version of UniqueEOPL.
Lemma 57. Every violation in L ′ can be mapped back to a violation of L.
Proof. There are three types of violation in L ′ .
1. Violations of type (UV1), which are edges where the potential decreases, are not possible, since the reduction of Hubáček and Yogev ensures that V ′ (S ′ (x)) = V ′ (x) + 1 whenever x and S ′ (x) are both vertices.
2. In violations of type (UV2) we have a vertex ((v_1, a_1), (v_2, a_2), . . . , (v_n, a_n)) that is the start of a second line. This means that, for some reason, we are not able to execute the optimal strategy backwards from this vertex. There are two possibilities.
(a) The optimal strategy needs to place a pebble on the successor of some vertex v_i, but it cannot because v_i is the end of a line. This means that either S(v_i) = v_i or that V(S(v_i)) ≠ V(v_i) + 1, and in either case we have a solution of type (UFP1) for L.
(b) The optimal strategy needs to remove the pebble on v_i, but it cannot, because it does not have a pebble on a vertex u with S(u) = v_i. By construction, there will be some pebble v_j with a_j = a_i − 1, but in this case we have S(v_j) ≠ v_i. This means that we have two lines, and specifically we have that v_i and S(v_j) are two distinct vertices with the same potential, since V(v_j) = a_j and V(S(v_j)) = V(v_j) + 1. This gives us a solution of type (UFPV1).
3. In violations of type (UV3) we have two distinct vertices x = ((v_1, a_1), (v_2, a_2), . . . , (v_n, a_n)) and y = ((u_1, b_1), (u_2, b_2), . . . , (u_n, b_n)) with V′(x) ≤ V′(y) < V′(S′(x)). Since the reduction ensures that V′(S′(x)) = V′(x) + 1, this means that x and y have the same potential. The reduction of Hubáček and Yogev ensures that, if two vertices have the same potential, then they refer to the same step of the optimal strategy, meaning that a_i = b_i for all i. This means that any pair of distinct bit-strings v_i ≠ u_i with a_i ≠ − is a pair of vertices with V(v_i) = V(u_i), and so a solution of type (UFPV1). To see that such a pair must exist, it suffices to note that the only vertex with a_i = − for all i is the start of the line, and there cannot be two distinct vertices with this property.
The lemma above proves that the reduction is correct, but it does not directly prove that it is promise-preserving. Specifically, in case 2a of the proof we show that some violations of type (UV2) are mapped back to solutions of type (UFP1). This, however, is not a problem, because we can argue that case 2a of the proof can only occur if there is more than one line in L.
Specifically, if we are at some vertex x = ((v_1, a_1), (v_2, a_2), . . . , (v_n, a_n)) and the predecessor circuit P′(x) needs to place a pebble on S(v_i), then v_i cannot be the furthest point in the pebble configuration, meaning that there is some v_j with a_j ≠ − and a_j > a_i. This can be verified by inspecting the recursive definition of the optimal strategy. But note that if v_i is the end of a line, and v_j is a vertex with V(v_j) > V(v_i), then L must contain more than one line.
This allows us to argue that the reduction is promise-preserving, since if L is promised to have no violations, then it must contain exactly one line, and if L contains exactly one line, then all proper solutions of L are mapped onto proper solutions of L ′ . Thus L ′ will contain no violations. This completes the proof of Lemma 20.
C Proofs for Section 3.2: UniqueEOPL to OPDC

C.1 Proof of Lemma 22
This construction is very similar to the one used in the proof of Lemma 18 given in Appendix B.2, although here we must deal with the predecessor circuit, and ensure that the end of each line has the correct potential. Let L = (S, P, V ) be an instance of UniqueEOPL, and let k be the bit-length of the vertices used in L.
We will create an instance L′ = (S′, P′, V′) in the following way. Each vertex in L′ will be a pair (v, i), where v is a vertex in L, and i is an integer satisfying i ≤ 2^n, where n = k + 1. Hence, a vertex in L′ can be represented by a bit-string of length n + k.
The circuit S′ is defined as follows. Given a pair (v, i), we execute an algorithm consisting of several conditions. The first condition handles every pair (v, i) that is not a vertex of L′, either because v is not a vertex in the instance L, or because i is too large; any such pair is a self-loop, with S′(v, i) = (v, i).

The remaining conditions add a new sequence of vertices along any edge (x, y) of L where V(y) > V(x) + 1. Specifically, this sequence is

(x, 0), (x, 1), . . . , (x, V(y) − V(x) − 1), (y, 0),

so that the potential increases by exactly one along each edge of L′. Moreover, these conditions introduce a new line starting at (x, 0) whenever x is the end of a line in the original instance. This line has the form

(x, 0), (x, 1), . . . , (x, 2^n − V(x) − 1),

where (x, 2^n − V(x) − 1) is the new end of line. Observe that this line is always non-empty, since V(x) < 2^n − 1 by the choice of n.

The predecessor circuit P′ walks the line backwards, which is easy to construct, since we can either follow the predecessor circuit of L, or we can walk backwards along any of the sequences that we have introduced. Specifically, given a pair (v, i), we use the following algorithm to implement P′. If i > 0, then P′(v, i) = (v, i − 1), meaning that if we are on one of the new sequences, then we walk that sequence backwards. If i = 0 and v is not the start of a line, then P′(v, 0) = (P(v), V(v) − V(P(v)) − 1), meaning that if we are at (v, 0), then we move to the end of the sequence between (P(v), 0) and (v, 0). The potential function is defined by V′(v, i) = V(v) + i.
The two properties that we need hold for L′ by construction. Every edge is constructed so that V′(S′(x)) = V′(x) + 1 whenever x is a vertex, and the new lines starting at pairs (v, 0), where v is the end of a line in L, ensure that V′(x) = 2^n − 1 if and only if x is the end of a line in L′. The following lemma shows that the reduction is correct.
Lemma 58. Every solution of L ′ can be mapped back to a solution of L.
Proof. We enumerate the types of solutions in UniqueEOPL.

A solution of type (UV3) gives us a pair of vertices x = (v, i) and y = (u, j) such that x ≠ y, and either V′(x) = V′(y), or V′(x) < V′(y) < V′(S′(x)). Note that the latter case is impossible here, since by construction we have V′(S′(x)) = V′(x) + 1 whenever x is a vertex, so we must have V′(x) = V′(y).
If i = j, then V (v) = V (u), and so v and u form a solution of type (UV3) for L. Otherwise, we will suppose, without loss of generality, that i > j, which means that V (v) < V (u).
In this case one can check that V(v) < V(u) < V(S(v)), and so we have that v and u form a solution of type (UV3) for L.
The lemma above implies that the reduction is correct. To see that it is promise-preserving, it suffices to note that proper solutions of L are only ever mapped onto proper solutions of L ′ . Therefore, if it is promised that L has no violations, then L ′ will also have no violations. This completes the proof of Lemma 22.

C.2 Proof of Lemma 23
The proof requires us to define two polynomial-time algorithms.
Splitting lines around a vertex. We begin by defining the subline function. We will use two sub-functions that split an instance in two based on a particular vertex. Let L = (S, P, V ) be a UniqueEOPL instance, and v be some vertex of L.
1. We define the function FirstHalf(v, L) to return a UniqueEOPL instance (S′, P′, V′) by removing every vertex with potential greater than or equal to V(v). Specifically, the S′ and P′ circuits will check whether V(x) ≥ 2^{n−1} for each vertex x, and if so, then they will set S′(x) = P′(x) = x, which ensures that x is not on the line. For any vertex x with V(x) < 2^{n−1}, we set S′(x) = S(x), P′(x) = P(x), and V′(x) = V(x). Note that S′, P′, and V′ can all be produced in polynomial time if we are given (S, P, V).
2. We define the function SecondHalf(v, L) to return a UniqueEOPL instance (S′, P′, V′) by removing every vertex with potential strictly less than V(v). For the circuits S′ and P′, this is done in the same way as the previous case, but this time the circuits will check whether V(x) < 2^{n−1}. The function V′ is defined so that V′(x) = V(x) − 2^{n−1}, thereby ensuring that the potentials are in the range [0, 2^{n−1}). Finally, the bit-string 0^n is remapped to represent the vertex v, which is the start of the second half of the line. In this case, we are able to compute S′, P′, and V′ in polynomial time if we are given (S, P, V) and the bit-string v.
We remark that, although we view these functions as intuitively splitting a line, the definitions still work if L happens to contain multiple lines. Each line in the instance will be split based on the potential of v.
The subline function. The subline function is defined recursively, based on the number of bit-strings that are given to the function. In the base case we are given a line L and zero bit-strings, in which case subline(L) = L. In the recursive case, subline(v_i, v_{i+1}, . . . , v_n, L) first computes L′ = (S′, P′, V′) = subline(v_{i+1}, . . . , v_n, L), and then applies either FirstHalf(v_i, L′) or SecondHalf(v_i, L′), according to whether V′(v_i) lies in the lower or the upper half of the potential range of L′.
Note that since FirstHalf and SecondHalf can be computed in polynomial time, subline can also be computed in polynomial time.
An important property of the reduction is that the output of subline(v_i, v_{i+1}, . . . , v_n, L) is a UniqueEOPL instance in which the longest possible line has length 2^{i−1}. This can be proved by induction. For the base case, note that subline(L) allows potentials between 0 and 2^n − 1. Each step of the recursion cuts this range in half, meaning that subline(v_i, v_{i+1}, . . . , v_n, L) allows potentials between 0 and 2^i − 1. Since each edge in a line must always strictly increase the potential, this means that the longest possible line in subline(v_i, v_{i+1}, . . . , v_n, L) has length 2^{i−1}. This holds even if the instance has multiple lines.
The decode function. Given a point (v_1, v_2, . . . , v_n), let L′ = subline(v_1, v_2, . . . , v_n, L). As we have argued above, we know that L′ is an instance in which the longest possible line has length 2^{1−1} = 1. Hence, the starting vertex of L′, which is by definition v_1, is also the end of a line. So we set decode(v_1, v_2, . . . , v_n) = v_1. Since subline can be computed in polynomial time, decode can also be computed in polynomial time. This completes the proof of Lemma 23.
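On a toy instance the halving functions and the decode-style composition can be exercised directly. This is an illustrative simplification, not the paper's construction: vertices are integers, the potential is the identity, and the role of the v_i coordinates is played by reading off the bits of the end vertex's current potential:

```python
def make_line(end):
    """Toy UniqueEOPL-style instance on vertices 0, 1, 2, ...:
    the (unique) line is 0 -> 1 -> ... -> end, with potential V(x) = x."""
    S = lambda x: x + 1 if x < end else x
    P = lambda x: x - 1 if 0 < x <= end else x
    V = lambda x: x
    return S, P, V

def first_half(L, mid):
    """Remove every vertex with potential >= mid; removed vertices become
    self-loops, so the line now ends just below the midpoint."""
    S, P, V = L
    keep = lambda x: V(x) < mid
    return (lambda x: S(x) if keep(x) and keep(S(x)) else x,
            lambda x: P(x) if keep(x) and keep(P(x)) else x,
            V)

def second_half(L, mid):
    """Remove every vertex with potential < mid and shift potentials down
    by mid, so they again start at 0."""
    S, P, V = L
    keep = lambda x: V(x) >= mid
    return (lambda x: S(x) if keep(x) and keep(S(x)) else x,
            lambda x: P(x) if keep(x) and keep(P(x)) else x,
            lambda x: V(x) - mid)

def end_of_line(L, start):
    """Walk the line from `start` until the successor is a self-loop."""
    S, _, _ = L
    x = start
    while S(x) != x:
        x = S(x)
    return x

L = make_line(13)                       # potentials conceptually in [0, 16)
assert end_of_line(first_half(L, 8), 0) == 7
assert end_of_line(second_half(L, 8), 8) == 13

# Mimicking subline/decode: repeatedly keep the half of the potential range
# containing the end of the line; after 4 halvings the line has shrunk to a
# single vertex, which is the end of the original line.
cur, shift = L, 0
for i in reversed(range(4)):
    mid = 2 ** i
    if cur[2](13) >= mid:               # the end lies in the upper half
        cur, shift = second_half(cur, mid), shift + mid
    else:
        cur = first_half(cur, mid)
assert end_of_line(cur, shift) == 13
```

Each halving cuts the allowed potential range in two, mirroring the inductive length bound above.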

C.3 The formal definition of the direction functions.
We will extend the approach given in the main body to all dimensions. Let p = (v 1 , v 2 , . . . , v n ) be a point. For the dimensions j in the range m(i − 1) + 1 ≤ j ≤ mi we use the following procedure to determine D j . Let L ′ = (S ′ , P ′ , V ′ ) = subline(v i+1 , v i+2 , . . . , v n , L).
1. In the case where V′(v_i) < 2^{i−1}, meaning that decode(p) is a vertex in the first half of L′, there are two possibilities.
(a) If V′(decode(p)) = 2^{i−1} − 1, meaning that decode(p) is the last vertex on the first half of L′, then we orient the direction function towards the bit-string given by S(decode(p)). So we set D_j(p) = up if p_j = 0 and S(decode(p))_j = 1, D_j(p) = down if p_j = 1 and S(decode(p))_j = 0, and D_j(p) = zero otherwise.
(b) If the rule above does not apply, then we orient everything towards 0 by setting D_j(p) = down if p_j = 1, and D_j(p) = zero otherwise. Observe that this definition simply extends the idea presented in the main body to all dimensions.

C.4 Proof of Lemma 24
To prove this lemma, we must map every solution of the OPDC instance given by P and D to a solution of the UniqueEOPL instance L = (S, P, V ). We do this by enumerating the solution types for OPDC.
1. In solutions of type (O1) we have a point p = (v 1 , v 2 , . . . , v n ) such that D i (p) = zero for all i. Since the dimensions corresponding to v n are all zero, we know that decode(p) is in the second half of the line, and since the dimensions corresponding to v n−1 are all zero, we know that decode(p) is in the last quarter of the line. Applying this reasoning for all dimensions allows us to conclude that V (decode(p)) = 2 n − 1. Lemma 22 implies that this can be true if and only if decode(p) is the end of a line, which is a solution of type (U1).
2. In solutions of type (OV1) we have two fixed points of a single i-slice. More specifically, we have an i-slice s and two points p = (v 1 , v 2 , . . . , v n ) and q = (u 1 , u 2 , . . . , u n ) in the slice s with p = q such that D j (p) = D j (q) = zero for all j ≤ i. Let v j be the bit-string that uses dimension i, and let L ′ = (S ′ , P ′ , V ′ ) = subline(v j+1 , v j+2 , . . . , v n , L). Since p and q both lie in s, we know that v l = u l for all l > j. Observe that, since D l (p) = D l (q) = zero for all l ≤ j, we can use the same reasoning as we did in case 1 to argue that v 1 through v j−1 encode a vertex that is at the end of the sub-line embedded into ( * , * , . . . , v j , v j+1 , . . . , v n ), and u 1 through u j−1 encode a vertex that is at the end of the sub-line embedded into ( * , * , . . . , u j , v j+1 , . . . , v n ).
There are multiple cases to consider.
(a) If v_j = u_j, then the two sub-lines coincide. Since p ≠ q, there must be some pair of bit-strings v_l and u_l such that v_l ≠ u_l, and note that since p and q are both at the end of their respective sub-lines, we have V′(v_l) = V′(u_l), which also implies that V(v_l) = V(u_l), which is a solution of type (UV3).
(b) If v_j ≠ u_j, and D_l(p) = zero for all l in the range m(i − 1) + 1 ≤ l ≤ mi, then subline(v_j, v_{j+1}, . . . , v_n, L) is the second half of L′, while subline(u_j, u_{j+1}, . . . , u_n, L) is the first half. Since q represents a vertex at the end of the corresponding line, we have that S(decode(q)) is the first vertex on the second half of L′, meaning that V′(S(decode(q))) = 2^{j−1}. Moreover, since p is on the second half of the line, we have that V′(v_j) = 2^{j−1}, so we have V(S(decode(q))) = V(v_j). This is a solution of type (UV3) so long as S(decode(q)) ≠ v_j. To prove that this is the case, recall that the direction functions for q in the dimensions corresponding to u_j always point towards S(decode(q)). Since u_j ≠ v_j, and since u_j and v_j lie in the same slice s, we know that they disagree on some dimension l ≤ i. But we also have that D_l(q) = zero for all l ≤ i, which can only occur if S(decode(q)) disagrees with v_j in some dimension. Hence we have shown that S(decode(q)) ≠ v_j.
(c) If v_j ≠ u_j, and D_l(q) = zero for all l in the range m(i − 1) + 1 ≤ l ≤ mi, then this case is entirely symmetric to the previous one.
(d) In the last case, we have v_j ≠ u_j, an index l_1 in the range m(i − 1) + 1 ≤ l_1 ≤ mi with D_{l_1}(p) ≠ zero, and an index l_2 in the range m(i − 1) + 1 ≤ l_2 ≤ mi with D_{l_2}(q) ≠ zero. Thus subline(v_j, v_{j+1}, . . . , v_n, L) = subline(u_j, u_{j+1}, . . . , u_n, L), meaning that both points lie at the end of the first half of L′. Thus the direction functions at p point towards S(decode(p)), while the direction functions at q point towards S(decode(q)). Moreover we have V′(S(decode(p))) = V′(S(decode(q))), so to obtain a solution of type (UV3), we need to prove that S(decode(p)) ≠ S(decode(q)).
This follows from the fact that v_j ≠ u_j, and from the fact that v_j and u_j both lie in the slice s. This means that they disagree on some dimension l < i, but since D_a(p) = D_a(q) = zero for all a < i, we must have S(decode(p)) ≠ S(decode(q)).
3. In solutions of type (OV2) we have an i-slice s and two points p = (v_1, v_2, . . . , v_n) and q = (u_1, u_2, . . . , u_n) in the slice s such that
• D_j(p) = D_j(q) = zero for all j < i,
• p_i = q_i + 1, and
• D_i(p) = down and D_i(q) = up.
As with case 2, let v_j be the bit-string that uses dimension i, and let L′ = subline(v_{j+1}, v_{j+2}, . . . , v_n, L) = subline(u_{j+1}, u_{j+2}, . . . , u_n, L). Since D_i(p) ≠ zero and D_i(q) ≠ zero, we must have that p and q are both points on the first half of L′. Furthermore, decode(p) and decode(q) must both be at the end of the first half, since D_l(p) = D_l(q) = zero for all l < i. Hence, V′(decode(p)) = V′(decode(q)) = 2^{i−1} − 1, which also implies that V(decode(p)) = V(decode(q)). So to obtain a solution of type (UV3), we just need to prove that decode(p) ≠ decode(q).
To prove this, we observe that the direction function in dimension i is oriented towards S(decode(p)) for the point p, and towards S(decode(q)) for the point q. However, D_i(p) = down means that the target of p lies below p_i, while D_i(q) = up means that the target of q lies above q_i = p_i − 1, so no single point can serve as both targets. This must mean that S(decode(p)) ≠ S(decode(q)), which also implies that decode(p) ≠ decode(q), as required.

To see that the reduction is promise-preserving, it suffices to note that we only ever map solutions of type (U1) onto solutions of type (O1). Thus, if the original instance has only solutions of type (U1), then the resulting OPDC instance will only have solutions of type (O1). This completes the proof of Lemma 24.

D.1 Proof of Lemma 27
Every solution of the OPDC instance can be mapped back to a solution of the USO instance. We prove this by enumerating all possible types of solution.
1. In solutions of type (O1) we are given a point p ∈ P such that D_i(p) = zero for all i. If Ψ(p) ≠ −, then this means that p is a sink, and so it is a solution of type (US1). If Ψ(p) = −, then p is a solution of type (USV1).
2. In solutions of type (OV1), we have an i-slice s and two points p, q ∈ P_s with p ≠ q such that D_j(p) = D_j(q) = zero for all j ≤ i. If Ψ(p) = − or Ψ(q) = −, then we have a solution of type (USV1). Otherwise, this means that p and q are both sinks of the face corresponding to s, and specifically this means that they have the same out-map on this face. So this gives us a solution of type (USV2).
3. In solutions of type (OV2) we have an i-slice s and two points p, q ∈ P_s such that
• D_j(p) = D_j(q) = zero for all j < i,
• p_i = q_i + 1, and
• D_i(p) = down and D_i(q) = up.
Note that in this case we must have Ψ(p) ≠ − and Ψ(q) ≠ −. If we restrict the cube to the face defined by s, note that Ψ(p)_j = Ψ(q)_j = 0 for all dimensions j ≠ i, and Ψ(p)_i = Ψ(q)_i = 1.
Hence p and q have the same out-map on the face defined by s, which gives us a solution of type (USV2).
4. Solutions of type (OV3) are impossible for the instance produced by our reduction. In these solutions we have a point p with p_i = 0 and D_i(p) = down, or a point q with q_j = 1 and D_j(q) = up. The direction functions produced by our reduction are derived from the out-map Ψ, and so they never direct a point outside the cube; hence solutions of type (OV3) cannot occur.
To see that the reduction is promise-preserving, it suffices to note that solutions of type (US1) are only ever mapped onto solutions of type (O1). Thus, if the USO instance has no violations, then the resulting OPDC instance also has no violations. This completes the proof of Lemma 27.
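To make the unique-sink condition concrete, here is a brute-force USO checker. It only illustrates the property being discussed and plays no role in the reduction; `is_uso` and the example orientations are our own names:

```python
from itertools import product

def is_uso(n, out_map):
    """Check that an orientation of the n-cube is a Unique Sink Orientation:
    every face has exactly one sink.  out_map(v) returns the set of dimensions
    whose incident edge is directed away from vertex v in {0,1}^n.
    (Illustrative exhaustive check, exponential in n.)"""
    for face in product((0, 1, None), repeat=n):      # None marks a free dimension
        free = [i for i, b in enumerate(face) if b is None]
        sinks = 0
        for bits in product((0, 1), repeat=len(free)):
            v = list(face)
            for i, b in zip(free, bits):
                v[i] = b
            # v is a sink of the face iff no free dimension is outgoing at v
            if not any(i in out_map(tuple(v)) for i in free):
                sinks += 1
        if sinks != 1:
            return False
    return True

# Directing every edge towards the all-zero vertex yields a USO.
towards_zero = lambda v: {i for i in range(len(v)) if v[i] == 1}
assert is_uso(3, towards_zero)

# The cyclic orientation of the 2-cube has no sink at all, so it is not a USO.
cyc = {(0, 0): {0}, (1, 0): {1}, (1, 1): {0}, (0, 1): {1}}
assert not is_uso(2, lambda v: cyc[v])
```

The cyclic example also shows why the unique-sink property must be checked on every face: each proper face of that cube does have a unique sink, but the full cube has none.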

E.1 Slice Restrictions of Contraction Maps
Our algorithms will make heavy use of the concept of a slice restriction of a contraction map, which we describe here. We define the set of fixed coordinates of a slice s ∈ Slice_d by fixed(s) = {i ∈ [d] | s_i ≠ *}, and the set of free coordinates by free(s) = [d] \ fixed(s). Thus, an i-slice is a slice s ∈ Slice_d for which free(s) = [i]. We'll say that a slice is a k-dimensional slice if |free(s)| = k.
We can define the slice restriction of a function f : [0, 1]^d → [0, 1]^d with respect to a slice s ∈ Slice_d, denoted f|_s, to be the function obtained by fixing the coordinates in fixed(s) according to s, and keeping the coordinates of free(s) as arguments. To simplify usage of f|_s we'll formally treat f|_s as a function with d arguments, where the coordinates in fixed(s) are ignored: f|_s(x) = f(y), where y_i = s_i for each i ∈ fixed(s) and y_i = x_i for each i ∈ free(s). Let free(s) = {i_1, . . . , i_k}. We'll also introduce a variant of f|_s for when we want to consider the slice restriction as a lower-dimensional function, written f̂|_s : [0, 1]^k → [0, 1]^k, which takes only the free coordinates as arguments and returns only the free coordinates of the result. We can also define slice restrictions for vectors in the natural way, and we'll use x̂|_s to denote the projection of x onto the coordinates in free(s), so that x̂|_s = (x_{i_1}, . . . , x_{i_k}).

We now extend the definition of a contraction map to a slice restriction of a function in the obvious way. We say that f|_s is a contraction map with respect to a norm ∥·∥ with Lipschitz constant c if for any x, y ∈ [0, 1]^d we have ∥f̂|_s(x̂|_s) − f̂|_s(ŷ|_s)∥ ≤ c · ∥x̂|_s − ŷ|_s∥.

We'll also introduce some notation to remove clutter. We'll write ∆_i(p) = f(p)_i − p_i, and use this notation when the function f is clear from the context. Note that we'll only use ∆_i(p) when considering the free coordinates of a slice restriction, so that ∆_i(p) doesn't depend on whether the most recent context has us considering f|_s or f.

Slice restrictions will prove immensely useful through the following observations.

Lemma 59. Let f : [0, 1]^d → [0, 1]^d be a contraction map with respect to ∥·∥_p with Lipschitz constant c ∈ (0, 1). Then for any slice s ∈ Slice_d, f|_s is also a contraction map with respect to ∥·∥_p with Lipschitz constant c.
Proof. For any two vectors x, y ∈ [0, 1]^d, let x′ and y′ be the vectors that agree with s on the coordinates in fixed(s), and with x and y respectively on the coordinates in free(s). Then we have

∥f̂|_s(x̂|_s) − f̂|_s(ŷ|_s)∥_p ≤ ∥f(x′) − f(y′)∥_p ≤ c · ∥x′ − y′∥_p = c · ∥x̂|_s − ŷ|_s∥_p,

where the first inequality holds because discarding coordinates cannot increase an ℓ_p-norm, and the final equality holds because x′ and y′ agree on every coordinate in fixed(s).

Since slice restrictions of contraction maps are themselves contraction maps in the sense defined above, they have unique fixpoints, up to the coordinates of the argument which are fixed by the slice and thus ignored. We'll nevertheless refer to the unique fixpoint of a slice restriction of a contraction map, which is the unique point x ∈ [0, 1]^d such that f̂|_s(x̂|_s) = x̂|_s and x_i = s_i for every i ∈ fixed(s).
Proof. We'll prove this by contradiction. Without loss of generality, assume towards a contradiction that x i ≤ y i and that f (y) i > y i . Then we have which contradicts the fact that f is a contraction map. The lemma follows.
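Lemma 59 can be sanity-checked numerically. The following sketch uses an arbitrary example contraction with Lipschitz constant 0.3 under the ℓ∞-norm (our own example, not from the paper):

```python
import random

def restrict(f, s):
    """Lower-dimensional slice restriction: fix coordinate i to s[i] wherever
    s[i] is not None, and keep the remaining coordinates as arguments."""
    free = [i for i, v in enumerate(s) if v is None]
    def g(xs):
        y = list(s)
        for i, x in zip(free, xs):
            y[i] = x
        out = f(tuple(y))
        return tuple(out[i] for i in free)
    return g

linf = lambda a, b: max(abs(x - y) for x, y in zip(a, b))

# Example contraction on [0,1]^3 with Lipschitz constant c = 0.3 (l_inf norm).
f = lambda x: (0.3 * x[1] + 0.2, 0.3 * x[2] + 0.1, 0.3 * x[0] + 0.4)
g = restrict(f, (None, None, 0.5))     # a 2-dimensional slice restriction

random.seed(1)
for _ in range(1000):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    # The slice restriction is still a 0.3-contraction ...
    assert linf(g(a), g(b)) <= 0.3 * linf(a, b) + 1e-12

# ... and therefore has a unique fixpoint, found here by Banach iteration.
x = (0.0, 0.0)
for _ in range(200):
    x = g(x)
assert linf(g(x), x) < 1e-9
```

Since the restriction only drops coordinates and fixes others to constants, the Lipschitz constant can only shrink, which is exactly the content of the lemma.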

E.2 Approximate Fixpoint Lemmas
In this section we state and prove the key lemmas that will be used both in our reduction of Contraction to EndOfPotentialLine and in our algorithms for finding approximate fixpoints. The lemmas reflect the intuition that sufficiently good approximate fixpoints on (k − 1)-dimensional slices can be used to obtain slightly worse approximate fixpoints for k-dimensional slices, for a notion of approximation to be formalized shortly. By choosing our approximation guarantees at each level appropriately, we can ensure that we end up with an approximate fixpoint for the original function.
To formalize this intuition we define an approximate fixpoint of a given dimension with respect to some ℓ_p-norm and a slice. For any fixed ℓ_p-norm, i-slice s, and dimension parameter k ≤ i, we'll say that a point x ∈ s is an (s, ℓ_p, k)-approximate fixpoint if |∆_j(x)| ≤ ε_j for every dimension j ≤ k, where (ε_j)_{j=1}^k is a sequence of constants depending only on the ℓ_p-norm and the dimension d. To make the dependence on p and d explicit, we'll write ε_j(p, d) instead of ε_j.
We'll define an ε-approximate fixpoint of a contraction map f w.r.t. an ℓ_p-norm to be a point x ∈ [0, 1]^d such that ∥f(x) − x∥_p ≤ ε. For each different ℓ_p-norm and dimension d of our contraction map, we will choose a different sequence (ε_i(p, d))_{i=1}^d, chosen so that the displacement at a sufficiently good approximate fixpoint of a slice indicates the direction of the unique fixpoint of that slice. There are two distinct cases in the choice of (ε_i(p, d))_{i=1}^d: one for p = 1, and one for p ≥ 2. The key property satisfied by these choices is captured in Lemma 61, whose proof also treats the cases p = 1 and p ≥ 2 separately. We now prove lemmas showing that these choices of (ε_i(p, d))_{i=1}^d ensure that (k − 1)-dimensional fixpoints can be used to find k-dimensional fixpoints.
F Proofs for Section 4.2: PL-Contraction to OPDC

F.1 Proof of Lemma 30
We'll define the bit-length of an integer n ∈ Z, denoted b(n), by b(n) = ⌈log_2 n⌉. We'll extend the definition to the rationals by defining the bit-length of x ∈ Q as the minimum number of bits needed to represent the numerator and denominator of some representation of x as a ratio of integers: b(x) = min_{p,q ∈ Z, x = p/q} (b(p) + b(q)).
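These definitions are straightforward to compute exactly; the following sketch uses the convention b(n) = 0 for n ≤ 1, which the text leaves implicit:

```python
from fractions import Fraction

def b_int(n):
    """Bit-length b(n) = ceil(log2 n) of an integer, computed exactly:
    (n - 1).bit_length() equals ceil(log2 n) for n >= 2.  (For n <= 1 we
    return 0 by convention; the text's definition targets larger n.)"""
    n = abs(n)
    return (n - 1).bit_length() if n > 1 else 0

def b_rat(x):
    """b(x) = min over representations x = p/q of b(p) + b(q).  Since b is
    monotone in |p| and |q|, the minimum is attained at the reduced fraction,
    which Fraction computes automatically."""
    x = Fraction(x)
    return b_int(x.numerator) + b_int(x.denominator)

assert b_int(8) == 3 and b_int(9) == 4
assert b_rat(Fraction(2, 4)) == b_int(1) + b_int(2) == 1
```

The 2/4 example shows why the minimum over representations matters: the reduced form 1/2 costs a single bit, while the unreduced form would cost three.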
We'll extend this notion of bit-length to matrices by defining the bit-length b(M) of a matrix M ∈ Q^{m×n} by b(M) = max_{i,j} b(M_{ij}), and to vectors analogously.

LinearFIXP circuits and LCPs. The goal of this proof is to find a tuple (k_1, k_2, . . . , k_d) such that the lemma holds. We utilize a lemma from [59] which asserts that every LinearFIXP circuit can be transformed in polynomial time into an LCP with bounded bit-length, such that solutions to the LCP capture exactly the fixpoints of the circuit. For any LinearFIXP circuit C : [0, 1]^d → [0, 1]^d with gates g_1, . . . , g_m and constants ζ_1, . . . , ζ_q, we'll write size(C) for the size of C.

Lemma 66 ([59]). Let C : [0, 1]^d → [0, 1]^d be a LinearFIXP circuit. We can produce in polynomial time an LCP defined by an n × n matrix M_C and an n-dimensional vector q_C, for some n with d ≤ n ≤ size(C), such that there is a bijection between solutions of the LCP (M_C, q_C) and fixpoints of C. Moreover, to obtain a fixpoint x of C from a solution y to the LCP (M_C, q_C), we can just set x = (y_1, . . . , y_d). Furthermore, b(M_C) and b(q_C) are both at most O(n × size(C)).
Crucially, the construction interacts nicely with fixing inputs; if C′ denotes a circuit where one of the inputs of C is fixed to be some number x, we can bound the bit-lengths of M_{C′} and q_{C′} in terms of the bit-lengths of M_C, q_C, and x.

Observation 67. b(M_{C′}) ≤ b(M_C) and b(q_{C′}) ≤ max(b(q_C), b(M_C) + b(x) + 1).

In other words, the bit-length of M_{C′} does not depend on x, and is in fact at most the bit-length of M_C, and the bit-length of q_{C′} is bounded by the worse of the bit-length of q_C or the sum of the bit-lengths of M_C and x plus an additional bit.
Bounding the bit-length of a solution of an LCP. We now prove two technical lemmas about the bit-length of any solution to an LCP. The first, Lemma 69, bounds the bit-length of a matrix inverse; we then use it to prove the following.

Lemma 70. Let (M, q) be an LCP with M ∈ Q^{n×n} and q ∈ Q^n. Every solution y of the LCP satisfies b(y) ≤ (5n + 2) log n + (4n + 1)b(M) + b(q).

Proof. We first note that if an LCP has a solution, then it has a vertex solution. Let (y, w) be such a vertex solution of the LCP and, as in Section 4.3, let α = {i | y_i > 0}, and let A = A_α be defined according to (3). We have that A is guaranteed to be invertible, and we have that Ax = q, with y_i = x_i for i ∈ α and y_i = 0 for i ∉ α, so we have b(y) ≤ b(x). Also note that we have b(A) ≤ b(M), since the entries in columns that take the value of e_i have constant bit-length.
We must transform A into an integer matrix in order to apply Lemma 69. Let ℓ denote the least common multiple of the denominators of the entries in A. Note that ℓ ≤ n 2 2 b(A) and hence b(ℓ) ≤ b(A) + 2 log(n). Our matrix equation above can be rewritten as ℓAx = ℓq, where (ℓA) is an integer matrix. Hence we have x = (ℓA) −1 (ℓq).
Each entry of x consists of the sum of n entries of (ℓA)^{−1}, each of which is multiplied by an entry of q, followed at the end by a multiplication by ℓ. We get the following bound on the bit-length of y: b(y) ≤ b(x) ≤ (5n + 2) log n + (4n + 1)b(M) + b(q), as required.
Fixing the grid size. We shall fix the grid sizes iteratively, starting with k_d, and working downwards. At each step, we will have a space that is partially continuous and partially discrete. Specifically, after the iteration in which we fix k_i, the dimensions j < i will allow any number in [0, 1], while the dimensions j ≥ i will only allow the points with denominator k_j. If I_k denotes the set of all rationals in [0, 1] with denominator at most k, then we will denote this space by

P(k_i, k_{i+1}, . . . , k_d) = [0, 1]^{i−1} × I_{k_i} × I_{k_{i+1}} × · · · × I_{k_d}.

Moreover, after fixing k_i, we will have that the property required by Lemma 30 holds for all j-slices s with j ≥ i: specifically, that if x is a fixpoint of s according to f, then there exists a p ∈ P(k_i, k_{i+1}, . . . , k_d) that is a fixpoint of s according to D.

We'll start by bounding the bit-length of a solution to the PL-Contraction problem computed one coordinate at a time. Given a PL-circuit C, we use Lemma 66 to produce an LCP defined by M ∈ Q^{n×n} and q ∈ Q^n. Now let x_1, . . . , x_n be formal variables representing the inputs to our circuit C. We want to determine parameters κ_1, . . . , κ_d such that if we fix the variables x_{i+1}, . . . , x_d to values with bit-lengths b(x_{i+1}), . . . , b(x_d), where b(x_j) ≤ κ_j for each j ∈ {i + 1, . . . , d}, then any fixpoint of C with respect to the free variables x_1, . . . , x_i will have bit-lengths b(x_1), . . . , b(x_i) ≤ κ_i. We'll set

κ_i = (d − i + 1)((5n + 2) log n + n + (4n + 2)b(M) + 1) + b(q).
To prove that these bounds suffice, we use induction on $i$, starting from $i = d$. First, we observe that by Lemma 70 we have $b(y) \le (5n + 2)\log n + (4n + 1)b(M) + b(q) \le \kappa_d$ for any solution $y$ to the LCP $(M, q)$. Moreover, each fixpoint $x$ of $C$ corresponds to a solution $y$ to the LCP by Lemma 66, so we have $b(x_1), \dots, b(x_d) \le \kappa_d$, which implies that all fixpoints can be found by choosing every coordinate to have bit-length at most $\kappa_d$. Now we handle the inductive case. For $i = 1, \dots, d$, let $M^{(i)}, q^{(i)}$ be the pair defining the LCP after $x_{i+1}$ through $x_d$ are fixed to values with bit-lengths bounded by $\kappa_{i+1}, \dots, \kappa_d$, respectively. This pair will of course depend on the values $x_{i+1}, \dots, x_d$, but since the bit-lengths of $M^{(i)}$ and $q^{(i)}$ depend only on the bit-lengths of the fixed values, we can ignore the values themselves as long as we have bounds on their bit-lengths and restrict our attention to the bit-lengths of $M^{(i)}$ and $q^{(i)}$.
Using Lemma 70 we know that any solution to the LCP satisfies $b(x_i) \le (5n + 2)\log n + (4n + 1)b(M^{(i)}) + b(q^{(i)})$. Moreover, we obtained $M^{(i)}$ by repeatedly fixing inputs to $C$, so repeated application of Observation 67 bounds $b(M^{(i)})$ in terms of $b(M)$ and the bit-lengths of the fixed values. By a simple argument, we can also show that $\kappa_{i+1} \ge b(q^{(i+1)})$, so the above bound is at most $\kappa_i$. We conclude that all solutions to the LCP $(M^{(i)}, q^{(i)})$ have free coordinates with bit-length at most $\kappa_i$, and similarly for the fixpoints of $C$. Thus, every fixpoint of $C$ with respect to the free variables can be found by choosing $x_1, \dots, x_i$ to have bit-lengths at most $\kappa_1, \dots, \kappa_i$, respectively, when $x_{i+1}, \dots, x_d$ are chosen to have bit-lengths at most $\kappa_{i+1}, \dots, \kappa_d$.
To conclude the proof of Lemma 30, we need to choose $k_1, \dots, k_d$ so that every $i$-slice with fixed coordinates in $P(k_1, \dots, k_d)$ has a fixpoint also on $P(k_1, \dots, k_d)$, where we map a point $x \in P(k_1, \dots, k_d)$ to $[0,1]^d$ by setting $y_i = x_i / k_i$. We set $k_i = 2^{\kappa_i}$. Now a point $x \in P(k_1, \dots, k_d)$ corresponds to a point $y \in [0,1]^d$ with $b(y_i) \le 2\kappa_i$ for all $i \in [d]$, and any $i$-slice whose fixed coordinates satisfy the bit-length bounds will have fixpoints for the remaining variables with all coordinates satisfying the bit-length bounds.
Finally, we observe that for each i ∈ [d], b(k i ) = κ i ≤ κ 1 = O(poly(size(C))), so each of the k i can be represented using polynomially many bits in the size of the circuit C.

F.2 Proof of Lemma 31
The proof of this lemma will make use of Lemmas 59 and 60, which are proved in Appendix E.1.
The statement of the lemma says that we may assume that $f$ is contracting with respect to some $\ell_p$ norm, and that we have an $i$-slice $s$ and two points $p, q$ in $s$ satisfying the following conditions.
• $D_j(p) = D_j(q) = \mathit{zero}$ for all $j < i$.

To translate $p$ and $q$ from the grid to the $[0, 1]^n$ space, we must divide each component by the grid length in that dimension. Specifically, we define the point $a \in [0,1]^n$ so that $a_i = p_i / k_i$ for all $i$, and the point $b \in [0,1]^n$ such that $b_i = q_i / k_i$ for all $i$.
Lemma 59 states that if f is contracting with respect to an ℓ p norm, then the restriction of f to the slice s is also contracting in that ℓ p norm. Hence, f must have a fixpoint in the slice s. Let x ∈ [0, 1] n denote this fixpoint.
By definition we have that $(f(x) - x)_j = 0$ for all $j \le i$, and we also have $(f(a) - a)_j = 0$ for all $j < i$ from the fact that $p$ is a fixpoint of its $i$-slice. So we can apply Lemma 60 to $x$ and $a$, and from this we get that $x_i < a_i$. By the same reasoning we can apply Lemma 60 to $x$ and $b$, and this gives us that $x_i > b_i$. Hence we have shown that $b_i < x_i < a_i$, and so the point $x$ satisfies the conditions of the lemma. This completes the proof of Lemma 31.

• Violations of type (OV2) map directly to violations of type (CMV3), as discussed in the main body.
• Violations of type (OV3) give us a point $p$ such that $p_i = 0$ and $D_i(p) = \mathit{down}$, or $p_i = k_i$ and $D_i(p) = \mathit{up}$. In both cases this means that $f(p) \notin [0, 1]^d$, and so we have a violation of type (CMV2).
To see that the reduction is promise-preserving, it suffices to note that violation solutions of the OPDC instance are only ever mapped on to violation solutions of the contraction instance. Hence, if the input problem is a contraction map, the resulting OPDC instance only has solutions of type (O1).

G.1 Background on Lemke's algorithm
The explanation of Lemke's algorithm in this section is taken from [32]. Recall the LCP problem from Definition 35: given a $d \times d$ matrix $M$ and a $d$-dimensional vector $q$, we want to find $y$ satisfying (1), where $w$ is a placeholder variable. The problem is interesting only when $q \not\ge 0$, since otherwise $y = 0$ is a trivial solution. Let $Q$ be the polyhedron in $2d$-dimensional space defined by the first three conditions; we will assume that $Q$ is non-degenerate (just for simplicity of exposition; this will not matter for our reduction). Under this condition, any solution to (1) will be a vertex of $Q$, since it must satisfy $2d$ equalities. Note that the set of solutions may be disconnected. The ingenious idea of Lemke was to introduce a new variable $z$ and consider the augmented system (18). The next lemma follows by construction of (18).
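Before following Lemke's path, it helps to pin down what a solution of (1) looks like. The sketch below is our own illustration: `is_lcp_solution` is not from the paper, and the sample matrix is made up; it checks feasibility and complementarity, and confirms that $y = 0$ is a trivial solution exactly when $q \ge 0$.

```python
from fractions import Fraction

def is_lcp_solution(M, q, y):
    """Check w = My + q, y >= 0, w >= 0, and complementarity y_i * w_i = 0."""
    d = len(q)
    w = [sum(M[i][j] * y[j] for j in range(d)) + q[i] for i in range(d)]
    feasible = all(yi >= 0 for yi in y) and all(wi >= 0 for wi in w)
    complementary = all(y[i] * w[i] == 0 for i in range(d))
    return feasible and complementary

M = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]  # a P-matrix
q_nonneg = [Fraction(1), Fraction(2)]
q_mixed = [Fraction(-1), Fraction(2)]

# When q >= 0, y = 0 (and hence w = q) is a trivial solution.
assert is_lcp_solution(M, q_nonneg, [Fraction(0), Fraction(0)])
# When q has a negative entry, y = 0 fails and a nontrivial y is needed.
assert not is_lcp_solution(M, q_mixed, [Fraction(0), Fraction(0)])
assert is_lcp_solution(M, q_mixed, [Fraction(1, 2), Fraction(0)])
```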
Let $P$ be the polyhedron in $(2d+1)$-dimensional space defined by the first four conditions of (18), as given in (19). For now, we will assume that $P$ is non-degenerate.
Since any solution to (18) must still satisfy 2d equalities in P, the set of solutions, say S, will be a subset of the one-skeleton of P, i.e., it will consist of edges and vertices of P. Any solution to the original system (1) must satisfy the additional condition z = 0 and hence will be a vertex of P. Now S turns out to have some nice properties. Any point of S is fully labeled in the sense that for each i, y i = 0 or w i = 0. We will say that a point of S has duplicate label i if y i = 0 and w i = 0 are both satisfied at this point. Clearly, such a point will be a vertex of P and it will have only one duplicate label. Since there are exactly two ways of relaxing this duplicate label, this vertex must have exactly two edges of S incident at it. Clearly, a solution to the original system (i.e., satisfying z = 0) will be a vertex of P that does not have a duplicate label. On relaxing z = 0, we get the unique edge of S incident at this vertex.
As a result of these observations, we can conclude that every vertex of S with z > 0 has degree two within S, while a vertex with z = 0 has degree one. Thus, S consists of paths and cycles.
Of these paths, Lemke's algorithm explores a special one. An unbounded edge of S such that the vertex of P it is incident on has z > 0 is called a ray. Among the rays, one is special -the one on which y = 0. This is called the primary ray and the rest are called secondary rays. Now Lemke's algorithm explores, via pivoting, the path starting with the primary ray. This path must end either in a vertex satisfying z = 0, i.e., a solution to the original system, or a secondary ray. In the latter case, the algorithm is unsuccessful in finding a solution to the original system; in particular, the original system may not have a solution. We give the full pseudo-code for Lemke's algorithm in Table 1.

G.2 Reduction from P-LCP with (PV1) violations to EndOfPotentialLine
It is well known that if the matrix $M$ is a P-matrix (P-LCP), then $z$ strictly decreases on the path traced by Lemke's algorithm [14]. Furthermore, by a result of Todd [76, Section 5], paths traced by the complementary pivot rule can be locally oriented. Based on these two facts, we derive a polynomial-time reduction from P-LCP to EndOfPotentialLine first, and then from P-LCP to UniqueEOPL.
Let I = (M, q) be a given P-LCP instance, and let L be the length of the bit representation of M and q. We will reduce I to an EndOfPotentialLine instance E in time poly(L). According to Definition 9, the instance E is defined by its vertex set vert, and procedures S (successor), P (predecessor) and V (potential). Next we define each of these.
As discussed in Section G.1, the linear constraints of (18), on which Lemke's algorithm operates, form the polyhedron $P$ given in (19). We assume that $P$ is non-degenerate. This is without loss of generality: a typical way to ensure this is to perturb $q$ so that the configurations of solution vertices remain unchanged [14], and since $M$ is unchanged, if $I$ was a P-LCP instance then it remains one.
Lemke's algorithm traces a path on the feasible points of (18), which lies on the 1-skeleton of $P$, starting at the point $(y^0, w^0, z^0)$. We want to capture vertex solutions of (18) as vertices in the EndOfPotentialLine instance $E$; to differentiate, we will sometimes call the latter configurations. Vertex solutions of (18) are exactly the vertices of the polyhedron $P$ with either $y_i = 0$ or $w_i = 0$ for each $i \in [d]$. Vertices of (18) with $z = 0$ are our final solutions (Lemma 71), while each non-solution vertex has a duplicate label. Thus, a vertex of this path can be uniquely identified by recording which of $y_i = 0$ and $w_i = 0$ holds for each $i$, together with its duplicate label. This gives us a representation for vertices in the EndOfPotentialLine instance $E$.
A configuration consists of $2d$ bits. The first $d$ bits encode which of $y_i = 0$ and $w_i = 0$ holds: for all $i \in [d]$, $u_i = 0 \Rightarrow y_i = 0$ and $u_i = 1 \Rightarrow w_i = 0$. A valid setting of the second set of $d$ bits, namely $u_{d+1}$ through $u_{2d}$, has at most one non-zero bit: if none is one then $z = 0$; otherwise the location of the non-zero bit indicates the duplicate label. Thus, there are many invalid configurations, namely those with more than one non-zero bit in the second set of $d$ bits. These are dummies that we will handle separately, and we define a procedure IsValid to identify non-dummy vertices in Table 2. To go between valid vertices of $E$ and corresponding vertices of the Lemke polytope $P$ of LCP $I$, we define procedures EtoI and ItoE in Table 3.
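The bit encoding can be made concrete. In the sketch below (our own code; the paper's actual IsValid procedure is specified in Table 2, and these helper names are ours), a configuration is a 0/1 list of length $2d$ whose second block encodes the duplicate label.

```python
def is_valid_config(u, d):
    """A configuration u in {0,1}^{2d} is valid when the second block of d
    bits, which encodes the duplicate label, has at most one bit set (no set
    bit means z = 0).  Configurations with two or more set bits are dummies."""
    assert len(u) == 2 * d and all(b in (0, 1) for b in u)
    return sum(u[d:]) <= 1

def duplicate_label(u, d):
    """Return the duplicate label encoded by u (1-indexed), or None if z = 0."""
    tail = u[d:]
    if sum(tail) == 0:
        return None
    return tail.index(1) + 1

d = 3
assert is_valid_config([1, 0, 1, 0, 0, 0], d)       # z = 0, no duplicate label
assert is_valid_config([0, 1, 1, 0, 1, 0], d)       # duplicate label 2
assert not is_valid_config([0, 0, 0, 1, 1, 0], d)   # dummy: two bits set
assert duplicate_label([0, 1, 1, 0, 1, 0], d) == 2
```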
By construction of IsValid, EtoI and ItoE, the next lemma follows.
Proof. The only things that can go wrong are that the matrix $A$ generated in the IsValid and EtoI procedures is singular, or that the set of duplicate labels DL generated in ItoE has more than one element. [The pseudo-code for procedure $S$ is given in Table 4, and for the potential $V$ in Table 5.] The main idea behind procedures $S$ and $P$, given in Tables 4 and 6 respectively, is the following (see also Figure 5): make dummy configurations in vert point to themselves with cycles of length one, so that they can never be solutions or violations. The starting vertex $0^n \in$ vert points to the configuration that corresponds to the first vertex of the Lemke path, namely $u^0 = \mathrm{ItoE}(y^0, w^0, z^0)$. Precisely, $S(0^n) = u^0$, $P(u^0) = 0^n$, and $P(0^n) = 0^n$ (start of a path).
For the remaining cases, let $u \in$ vert have corresponding representation $x = (y, w, z) \in P$, and suppose $x$ has a duplicate label. As one traverses a Lemke path for a P-LCP, the value of $z$ monotonically decreases [14]. So, for $S(u)$ we compute the adjacent vertex $x' = (y', w', z')$ of $x$ on the Lemke path such that the edge goes from $x$ to $x'$, and if $z' < z$, as expected, then we point $S(u)$ to the configuration of $x'$, namely $\mathrm{ItoE}(x')$. Otherwise, we let $S(u) = u$. Similarly, for $P(u)$, we find $x'$ such that the edge is from $x'$ to $x$, and then we let $P(u)$ be $\mathrm{ItoE}(x')$ if $z' > z$ as expected; otherwise $P(u) = u$.
For the case when $x$ does not have a duplicate label, we have $z = 0$. This is handled separately, since such a vertex has exactly one incident edge on the Lemke path, namely the one obtained by relaxing $z = 0$. According to the direction of this edge, we apply a similar process as before. For example, if the edge goes from $x$ to $x'$, then, if $z' < z$, we set $S(u) = \mathrm{ItoE}(x')$, else $S(u) = u$, and we always set $P(u) = u$. In case the edge goes from $x'$ to $x$, we always set $S(u) = u$, and we set $P(u)$ depending on whether or not $z' > z$.
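As a toy illustration of the intended shape of the instance, the following sketch (entirely ours; the real reduction only constructs the circuits $S$, $P$, $V$ and never walks the whole line) follows successor pointers while checking that the potential strictly increases, with dummies as self-loops.

```python
def follow_line(S, V, start):
    """Follow successor pointers from `start`, checking that the potential
    strictly increases at every step, and return the end of the line.
    Dummy configurations are self-loops and terminate immediately."""
    u = start
    while True:
        v = S(u)
        if v == u:          # end of the line (or a dummy self-loop)
            return u
        assert V(v) > V(u)  # no local maximizer along the line
        u = v

# A toy instance: a single line 0 -> 1 -> 2 -> 3 with V(u) = u,
# and every other configuration a self-loop dummy.
S = lambda u: u + 1 if u < 3 else u
V = lambda u: u
assert follow_line(S, V, 0) == 3
assert follow_line(S, V, 7) == 7  # dummy: a cycle of length one
```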
The potential function $V$, formally defined in Table 5, gives a value of zero to dummy vertices and to the starting vertex $0^n$. To all other vertices, it essentially assigns $((z^0 - z) \cdot \Delta^2) + 1$. Since the value of $z$ starts at $z^0$ and keeps decreasing along the Lemke path, this value keeps increasing, starting from zero at the starting vertex $0^n$. Multiplication by $\Delta^2$ ensures that if $z_1 > z_2$ then the corresponding potential values differ by at least one. This is because $z_1$ and $z_2$ are coordinates of two vertices of the polytope $P$, so their numerators and denominators are bounded above by $\Delta$, and hence $z_1 - z_2 \ge 1/\Delta^2$ (Lemma 74).
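The gap argument can be checked on sample numbers. In this sketch (our own code; `potential` mirrors the formula $((z^0 - z)\cdot\Delta^2) + 1$ from the text, and the sample values are invented), two distinct coordinates with denominators at most $\Delta$ produce potentials at least one apart.

```python
from fractions import Fraction

def potential(z, z0, Delta):
    """V = (z0 - z) * Delta^2 + 1, as in the text (non-dummy, non-start case)."""
    return (z0 - z) * Delta**2 + 1

# Two distinct vertex coordinates of the polytope have denominators at most
# Delta, so they differ by at least 1/Delta^2; after scaling by Delta^2 the
# potentials differ by at least 1.
Delta = 7
z0 = Fraction(5)
z1, z2 = Fraction(3, 7), Fraction(2, 5)   # denominators <= Delta
assert z1 != z2
gap = abs(potential(z1, z0, Delta) - potential(z2, z0, Delta))
assert gap >= 1
```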
To show correctness of the reduction we need to show two things: (i) all the procedures are well-defined and polynomial-time; (ii) we can construct a solution of $I$ from a solution of $E$ in polynomial time. [The pseudo-code for procedure $P$ is given in Table 6.]

Lemma 73. Functions $P$, $S$ and $V$ of instance $E$ are well defined, making $E$ a valid EndOfPotentialLine instance.
Proof. Since all three procedures are polynomial-time in $L$, they can be defined by poly($L$)-sized Boolean circuits. Furthermore, for any $u \in$ vert, the potential $V(u)$ is an integer that is at most $2\Delta^3$, and hence lies in the set $\{0, \dots, 2^m - 1\}$.
There are two possible types of solutions of an EndOfPotentialLine instance (see Definition 9): one indicates the beginning or end of a line (R1), and the other is a vertex with locally optimal potential (R2). First we show that (R2) never arises. For this, we need the next lemma, which shows that the potential difference between two adjacent configurations mirrors the difference in the value of $z$ at the corresponding vertices.
Lemma 74. Let $u \ne u'$ be two valid configurations, i.e., $\mathrm{IsValid}(u) = \mathrm{IsValid}(u') = 1$, and let $(y, w, z)$ and $(y', w', z')$ be the corresponding vertices in $P$. Then the following holds: (i) $V(u) = V(u')$ if and only if $z = z'$; (ii) $V(u) > V(u')$ if and only if $z < z'$. Proof. Among the valid configurations, all except $0^n$ have positive $V$ value. Therefore, without loss of generality, let $u, u' \ne 0^n$. For these we have $V(u) - V(u') = (z' - z) \cdot \Delta^2$. Note that since both $z$ and $z'$ are coordinates of vertices of $P$, whose description has largest coefficient $\max\{\max_{i,j \in [d]} M(i,j), \max_{i \in [d]} |q_i|\}$, their numerators and denominators are both bounded above by $\Delta$. Therefore, if $z < z'$ then we have $z' - z \ge 1/\Delta^2$, and hence $V(u) - V(u') \ge 1$. For (i), if $z = z'$ then clearly $V(u) = V(u')$, and from the above argument it also follows that if $V(u) = V(u')$ then it cannot be the case that $z \ne z'$. Similarly for (ii), if $V(u) > V(u')$ then clearly $z' > z$, and from the above argument it follows that if $z' > z$ then it cannot be the case that $V(u') \ge V(u)$.
Using the above lemma, we will next show that instance E has no local maximizer.
Lemma 75. Let $u \in$ vert be a valid configuration with $v = S(u) \ne u$. Then $V(v) > V(u)$. Proof. Let $x = (y, w, z)$ and $x' = (y', w', z')$ be the vertices in the polyhedron $P$ corresponding to $u$ and $v$ respectively. The construction of $v = S(u) \ne u$ implies that $z' < z$. Therefore, using Lemma 74, it follows that $V(v) > V(u)$.
Due to Lemma 75, the only type of solution available in $E$ is (R1), where $S(P(u)) \ne u$ or $P(S(u)) \ne u$. The next two lemmas show how to construct a solution of the P-LCP instance $I$, or a (PV1) type violation (a non-positive principal minor of the matrix $M$), from these.
Lemma 76. Let $u \in$ vert be such that $S(P(u)) \ne u$ or $P(S(u)) \ne u$. Then $\mathrm{IsValid}(u) = 1$, and the corresponding point $(y, w, z) = \mathrm{EtoI}(u)$ is a feasible vertex of (18); if $z = 0$, then $y$ is a type (Q1) solution of $I$. Proof. By construction, if $\mathrm{IsValid}(u) = 0$ then $S(P(u)) = u$ and $P(S(u)) = u$; therefore $\mathrm{IsValid}(u) = 1$ whenever $u$ has a predecessor or successor different from $u$. Given this, from Lemma 72 we know that $(y, w, z)$ is a feasible vertex of (18). Therefore, if $z = 0$, then by Lemma 71 we have a solution of the LCP (1), i.e., a type (Q1) solution of our P-LCP instance $I = (M, q)$.
Lemma 77. Let $u \in$ vert, $u \ne 0^n$, be such that $P(S(u)) \ne u$ or $S(P(u)) \ne u$, and let $x = (y, w, z) = \mathrm{EtoI}(u)$. If $z \ne 0$ then $x$ has a duplicate label, say $l$, and for the directions $\sigma_1$ and $\sigma_2$ obtained by relaxing $y_l = 0$ and $w_l = 0$ respectively at $x$, we have $\sigma_1(z) \cdot \sigma_2(z) \ge 0$, where $\sigma_i(z)$ is the coordinate corresponding to $z$.
Proof. From Lemma 76 we know that $\mathrm{IsValid}(u) = 1$, and therefore from Lemma 72, $x$ is a feasible vertex of (18). From the last lines of Tables 4 and 6, observe that $S(u)$ points to the configuration of the vertex next to $x$ on Lemke's path only if that vertex has a lower $z$ value, and otherwise gives back $u$; similarly, $P(u)$ points to the previous vertex only if the value of $z$ increases.
First consider the case $P(S(u)) \ne u$. Let $v = S(u)$, and let the corresponding vertex in $P$ be $(y', w', z') = \mathrm{EtoI}(v)$. If $v \ne u$, then from the above observation we know that $z' < z$, and in that case, again by construction of $P$, we will have $P(v) = u$, contradicting $P(S(u)) \ne u$. Therefore, it must be the case that $v = u$. Since $z \ne 0$, this happens only when the next vertex on the Lemke path after $x$ has a higher value of $z$ (by the above observation). As a consequence of $v = u$, we must have $P(u) \ne u$. By construction of $P$, this implies that for $(y'', w'', z'') = \mathrm{EtoI}(P(u))$ we have $z'' > z$. Putting both together, we get an increase in $z$ when we relax $y_l = 0$ as well as when we relax $w_l = 0$ at $x$.
For the second case, $S(P(u)) \ne u$, a similar argument gives that the value of $z$ decreases both when we relax $y_l = 0$ and when we relax $w_l = 0$ at $x$. The proof follows.
Finally, we are ready to prove the main result of this section using Lemmas 75, 76 and 77. Together with Lemma 77, we will use the fact that on the Lemke path $z$ monotonically decreases if $M$ is a P-matrix, or else we get a (PV1) type witness that $M$ is not a P-matrix [14].
Theorem 78. There is a polynomial-time promise-preserving reduction from P-LCP with (PV1) violations to EndOfPotentialLine. Proof. Given $u \ne 0^n$ such that either $S(P(u)) \ne u$ or $P(S(u)) \ne u$, by Lemma 76 it is a valid configuration and has a corresponding vertex $x = (y, w, z)$ in $P$. Again by Lemma 76, if $z = 0$ then $y$ is a (Q1) type solution of our P-LCP instance $I$. On the other hand, if $z > 0$ then from Lemma 77 we get that on both edges adjacent to $x$ on the Lemke path, the value of $z$ either increases or decreases. This gives us a minor of $M$ which is non-positive [14], i.e., a (Q2) type solution of the P-LCP instance $I$ with (PV1) violations.
The reduction is promise preserving because if the LCP instance is promised to be P-LCP then z monotonically decreases along the Lemke's path, and all feasible complementary vertices are on this path. Therefore, the corresponding EndOfPotentialLine instance will have exactly one path ending in a solution where the corresponding vertex x = (y, w, z) of the LCP has z = 0 mapping to the P-LCP solution.
UniqueEOPL has four types of solutions. Of these, (UV1) is ruled out by Lemma 75. Next we show that any extra ends of lines, as well as (UV3) type solutions, map to a (PV2) violation.
Lemma 80. Given either of the following, we can construct two distinct solutions of the LCP $(M, q')$ for some $q'$: (a) $u \in$ vert is a (U1) or (UV2) type solution of instance $E$ such that the corresponding vertex $x^* = (y^*, w^*, z^*) = \mathrm{EtoI}(u)$ has $z^* > 0$.
(b) u, v ∈ vert forms a (UV3) type solution of instance E.
Proof. The common idea for going from (a) or (b) to two solutions of an LCP with matrix $M$ is to create two (or more) solutions of (18) with the same $z$ value. Suppose $(y^*, w^*, a)$ and $(y', w', a)$ with $y^* \ne y'$ are feasible in (18) for some $a \in \mathbb{R}$; then clearly, for $q' = q + a$, both $(y^*, w^*)$ and $(y', w')$ are solutions of the LCP (1) with matrix $M$ and vector $q'$. For (a), let $x^* = (y^*, w^*, z^*) = \mathrm{EtoI}(u)$ with $z^* > 0$, and let $l$ be the duplicate label at the vertex $x^*$. Then from Lemma 77 we know that for the directions $\sigma_1$ and $\sigma_2$ obtained by relaxing $y_l = 0$ and $w_l = 0$ respectively at $x^*$, we have $\sigma_1(z) \cdot \sigma_2(z) \ge 0$, where $\sigma_i(z)$ is the coordinate corresponding to $z$. Suppose $\sigma_1(z), \sigma_2(z) < 0$, and for $i = 1, 2$, let $z_i$ be the value of $z$ at the vertex adjacent to $x^*$ in direction $\sigma_i$; set $z_i = -\infty$ if no vertex is encountered in direction $\sigma_i$. Let $\epsilon > 0$ be small enough so that $\epsilon < z^* - z_i$ for $i = 1, 2$, and consider the points $x^i = x^* + \frac{\epsilon}{|\sigma_i(z)|}\sigma_i$ on the edges corresponding to $\sigma_i$ adjacent to $x^*$. It is easy to check that, by the choice of $\epsilon$, both $x^1$ and $x^2$ are feasible. We will next show that these are solutions of an LCP defined by $(M, q')$ for some $q'$.
Note that by construction the $z$ coordinate at both $x^1$ and $x^2$ is $z^* - \epsilon$, giving us the desired two solutions of (18) with the same $z$ value. A similar argument holds when $\sigma_1(z), \sigma_2(z) > 0$, where the corresponding $z$ value is $z^* + \epsilon$. If either $\sigma_1(z)$ or $\sigma_2(z)$ is zero, then $z$ remains unchanged on the entire corresponding edge.
For (b), let $x^* = (y^*, w^*, z^*) = \mathrm{EtoI}(u)$ and $x' = (y', w', z') = \mathrm{EtoI}(v)$. If $V(u) = V(v)$, then clearly $z^* = z'$ (Lemma 74), and we get the desired two points feasible in (18) with the same $z$ value. If $V(u) < V(v) < V(S(u))$, then there exists a point on the edge joining $x^*$ with $\mathrm{EtoI}(S(u))$ with the same $z$ value as $z'$. Now we are ready to show our main result, that P-LCP with (PV2) violations reduces to UniqueEOPL, using Lemmas 75, 76, 79 and 80. Theorem 81. There is a polynomial-time promise-preserving reduction from P-LCP with (PV2) violations to UniqueEOPL, and thereby also to EndOfPotentialLine.
Proof. Given an instance $I = (M, q)$ of P-LCP, where $M \in \mathbb{R}^{d \times d}$ and $q \in \mathbb{R}^d$, we reduce it to an instance $E$ of UniqueEOPL as described above, with vertex set vert $= \{0, 1\}^{2d}$ and procedures $S$, $P$ and $V$ as given in Tables 4, 6, and 5 respectively.
Lemma 75 rules out (UV1) violations in $E$. If we get a (U1) solution or a (UV2) violation $u$ of $E$, then the corresponding vertex $x = (y, w, z)$ is feasible in (18) by Lemma 76. Furthermore, if $z = 0$ then $y$ is a (Q1) type solution of our P-LCP instance $I$. On the other hand, if $z > 0$, then by Lemmas 80 and 79 we can construct a (PV2) violation of our P-LCP instance $I$. Similarly, Lemmas 80 and 79 also map any (UV3) violation of the UniqueEOPL instance $E$ to a (PV2) violation of $I$.
By construction, if $I$ is a promise P-LCP instance, then the instance $E$ of UniqueEOPL will have exactly one (U1) solution, corresponding to the unique solution of $I$.

H Proofs for Section 5.1: Algorithms for Contraction Maps
In this section, we provide an exact algorithm for solving PL-Contraction: we either return a rational fixpoint of polynomial bit-length or a pair of points that prove (indirectly) that the given function is not a contraction map. Then we extend this algorithm to find an approximate fixpoint of general contraction maps, for which there may not be an exact solution of polynomial bit-length. In both cases, the problems solved by our algorithm are not promise problems, and we always return either a solution or a violation. Our algorithms work for any $\ell_p$ norm with $p \in \mathbb{N}$, and are polynomial for constant dimension $d$. These are the first such algorithms for $p \ne 2$: such algorithms were so far only known for the $\ell_2$ and $\ell_\infty$ norms [43, 70, 71].

H.1 Overview: algorithm to find a fixed-point of PL-Contraction

The algorithm does a nested binary search using Lemmas 59 and 60 to find fixpoints of slices with increasing numbers of free coordinates. We illustrate the algorithm in two dimensions in Figure 6.
The algorithm is recursive. To find the eventual fixpoint in $d$ dimensions, we fix a single coordinate $s_1$ and find the unique $(d-1)$-dimensional fixpoint of $f|_s$, the $(d-1)$-dimensional contraction map obtained by fixing the first coordinate of the input of $f$ to be $s_1$. Let $x$ be the unique fixpoint of $f|_s$, where $x_1 = s_1$. If $f(x)_1 > s_1$, then the $d$-dimensional fixpoint $x^*$ of $f$ has $x^*_1 > s_1$, and if $f(x)_1 < s_1$, then $x^*_1 < s_1$ (Lemma 60). We can thus do a binary search for the value of $x^*_1$. Once we've found $x^*_1$, we can recursively find the $(d-1)$-dimensional fixpoint of $f|_s$ where $s_1 = x^*_1$. The resulting solution will be the $d$-dimensional fixpoint. At each step in the recursive procedure, we do a binary search for the value of one coordinate of the fixpoint in the slice determined by all the coordinates already fixed. For piecewise-linear functions, we know that all fixpoints are rational with bounded bit-length (as discussed in Section F.1), so we can find each coordinate exactly.
If at any step in the recursion our binary search finds two $(k-1)$-dimensional fixpoints on slices that are adjacent, differing only in the $k$th coordinate and by a small enough amount, we can return these points, which witness the failure of $f$ to be a contraction map. These points correspond to a solution of type (CMV3) to the PL-Contraction problem. The proof that $f$ is not a contraction is indirect, and uses the fact that the discretized grid implicitly searched by the algorithm will contain every fixpoint of $f$. Since we maintain the invariant that our two pivots bound the coordinate we are searching over from above and below whenever $f$ is a contraction map, such a pair of points gives proof that $f$ is not contracting.
Figure 6: An illustration of the algorithm to find a fixpoint of a piecewise-linear contraction map in two dimensions. The algorithm begins by finding a fixpoint along the slice with $x_1 = 1/2$. The fixpoint along that slice points to the right, so we next find a fixpoint along the slice with $x_1 = 3/4$. The fixpoint along that slice points to the left, so we find the fixpoint along $x_1 = 5/8$. We successively find fixpoints of one-dimensional slices, and then use those to do a binary search for the two-dimensional fixpoint. The red regions are the successive regions considered by the binary search, where each successive step in the binary search results in a darker region.
Using this algorithm we obtain the following theorem.
Theorem 82. Given a LinearFIXP circuit C purporting to encode a contraction map f : [0, 1] d → [0, 1] d with respect to any ℓ p norm, there is an algorithm to find a fixpoint of f or return a pair of points witnessing that f is not a contraction map in time that is polynomial in size(C) and exponential in d.
The full details of the algorithm can be found in Appendix H.3.
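The nested binary search described above can be sketched as follows. This is a simplified model, entirely our own code: `find_fp` and the dyadic denominator bound `kappa` are illustrative stand-ins for the bit-length bounds of Section F.1, and it is run here on a small 2-D linear contraction.

```python
from fractions import Fraction

def find_fp(f, d, prefix, kappa):
    """Nested binary search for the fixpoint of a contraction on [0,1]^d,
    determining one coordinate at a time.  `prefix` holds the coordinates
    fixed so far; we assume every slice fixpoint is a dyadic rational with
    denominator at most 2**kappa."""
    i = len(prefix)
    if i == d:
        return list(prefix)
    lo, hi = Fraction(0), Fraction(1)
    for _ in range(kappa + 1):
        mid = (lo + hi) / 2
        v = find_fp(f, d, prefix + [mid], kappa)  # fixpoint of the sub-slice
        fv = f(v)
        if fv[i] == mid:
            return v
        if fv[i] > mid:
            lo = mid  # true coordinate lies to the right (Lemma 60 style)
        else:
            hi = mid
    # Two adjacent slice fixpoints bracket the coordinate: a (CMV3)-style witness.
    raise RuntimeError("f is not a contraction map")

# A 2-D linear contraction with unique fixpoint (3/8, 1/4).
def f(x):
    return [x[0] / 2 + x[1] / 4 + Fraction(1, 8),
            x[1] / 2 + Fraction(1, 8)]

fp = find_fp(f, 2, [], 4)
assert fp == [Fraction(3, 8), Fraction(1, 4)]
assert f(fp) == fp
```

The exact rational arithmetic mirrors why bounded bit-length matters: the binary search can terminate only because the fixpoint of each slice is a dyadic rational of bounded denominator.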

H.2 Overview: algorithm to find an approximate fixed-point of Contraction
Here we generalize our algorithm to find an approximate fixpoint of an arbitrary function given by an arithmetic circuit; i.e., our algorithm solves Contraction, which is specified by a circuit $f$ that represents the contraction map, a $p$-norm, and an $\epsilon$. Again, let $d$ denote the dimension of the problem, i.e., the number of inputs (and outputs) of $f$. Let $x^*$ denote the unique exact fixpoint of the contraction map $f$. We seek an approximate fixpoint, i.e., a point $x$ for which $\|f(x) - x\|_p \le \epsilon$.
We do the same recursive binary search as in the algorithm above, but at each step of the algorithm, instead of finding an exact fixpoint, we only find an approximate fixpoint of $f|_s$. The difficulty in this case comes from the fact that Lemma 60 does not apply to approximate fixpoints. Consider the example illustrated in Figure 7. In this example, $y$ is the unique fixpoint of the slice restriction along the gray dashed line. By Lemma 60, $(f(y)_1 - y_1)(x^*_1 - y_1) \ge 0$, so if we find $y$, we can observe that $f(y)_1 > y_1$ and recurse on the right side of the figure, in the region labeled $R$. If we try to use the same algorithm but only find approximate fixpoints at each step, we run into trouble. In this case, if we found $z$ instead of $y$, we would observe that $f(z)_1 < z_1$ and conclude that $x^*_1 < z_1$, which is incorrect. As a result, we would limit our search to the region labeled $L$, and wouldn't be able to find $x^*$.
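One way to make the recursion robust is to give each level of the recursion its own tolerance. The sketch below is our own simplified code: `approx_find_fp` and the hand-picked schedule `eps` are illustrative and not the paper's norm-dependent schedule; level $i$ pivots only while the residual exceeds `eps[i]`.

```python
from fractions import Fraction

def approx_find_fp(f, d, prefix, eps):
    """Approximate nested binary search: level i pivots only while the
    residual |f(v)_i - v_i| exceeds eps[i]; once it drops below eps[i],
    v is returned to the level above as good enough."""
    i = len(prefix)
    if i == d:
        return list(prefix)
    lo, hi = Fraction(0), Fraction(1)
    while True:
        mid = (lo + hi) / 2
        v = approx_find_fp(f, d, prefix + [mid], eps)
        r = f(v)[i] - v[i]
        if abs(r) <= eps[i]:
            return v  # good enough at this level: pass it up
        lo, hi = (mid, hi) if r > 0 else (lo, mid)

# A 2-D linear contraction with unique fixpoint (3/8, 1/4).
def f(x):
    return [x[0] / 2 + x[1] / 4 + Fraction(1, 8),
            x[1] / 2 + Fraction(1, 8)]

eps = [Fraction(1, 64), Fraction(1, 256)]
v = approx_find_fp(f, 2, [], eps)
# The returned point is an approximate fixpoint in every coordinate.
assert max(abs(f(v)[i] - v[i]) for i in range(2)) <= eps[0]
```

The decreasing schedule is the key design choice: inner levels must be solved more accurately than the level that consumes their answers, which is exactly the issue Figure 7 illustrates.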

Figure 7: A step in the recursive binary search. Here, $x^*$ is the fixpoint for the original function, $y$ is the fixpoint for the slice restriction $f|_s$ along the dashed gray line, and $z$ is an approximate fixpoint of the slice restriction.
When looking for an approximate fixpoint, we have to choose a different precision $\epsilon_i$ for each level of the recursion, so that either the point $x$ returned by the $i$th recursive call to our algorithm satisfies $|f(x)_i - x_i| > \epsilon_i$ and we can rely on it for pivoting in the binary search, or $|f(x)_i - x_i| \le \epsilon_i$ and we can return $x$ as an approximate fixpoint to the recursive call one level up. Each $\ell_p$ norm requires a different choice of $(\epsilon_i)_{i=1}^d$. Using this idea we are able to obtain the following results:

Algorithm 1 Algorithm for PL-Contraction
1: Input: A $k$-slice $s \in \mathrm{Slice}_d(2^{\kappa_1}, \dots, 2^{\kappa_d})$ for some $k \le d$.
2: Output: The unique fixpoint of $s$, i.e., a point $y$ such that $f|_s(y) = y|_s$ and $y = y|_s$.
3: function FindFP($s$)
⋮
17: Set $v \leftarrow$ FindFP($t$).
18: if $f(v)_k = t_k$ then return $v$.
19: end if
20: if $f(v)_k > t_k$ then set $t^{(\ell)} \leftarrow t$ else set $t^{(h)} \leftarrow t$.
⋮
24: Set $t_k \leftarrow$ the unique number in $(t^{(\ell)}_k, t^{(h)}_k)$ with denominator at most $2^{\kappa_k}$.
⋮
27: else throw error: "The pair $v^{(\ell)}, v^{(h)}$ is a solution of type (CMV3)."
28: end if
29: end function

The binary search maintains the invariant that if we let $v^{(\ell)} = \mathrm{FindFP}(t^{(\ell)})$ and $v^{(h)} = \mathrm{FindFP}(t^{(h)})$, then $f(v^{(\ell)})_k > v^{(\ell)}_k$ and $f(v^{(h)})_k < v^{(h)}_k$. By Lemma 60, this invariant ensures that $t^{(\ell)}_k \le t^*_k \le t^{(h)}_k$ at all times. Therefore, at some point in the binary search either one of the endpoints is $t^*_k$, and we return the desired fixpoint, or we end the binary search with $t^{(h)}_k - t^{(\ell)}_k \le 1/2^{\kappa_k - 1}$. By assumption we know that $t^*_k$ is a rational number with denominator at most $2^{\kappa_k}$. Since there can be at most one such number in $(t^{(\ell)}_k, t^{(h)}_k)$, $t^*_k$ can be uniquely identified. Let $t^*$ denote the slice $t$ after setting $t_k \leftarrow t^*_k$. The second-to-last line of Algorithm 1 will then return the unique fixpoint of $f|_{t^*}$, which will be the unique fixpoint of $f|_s$ when $f$ is a contraction map. The only way this can fail to happen is if the point returned by FindFP($t^*$) is not actually a $k$-dimensional fixpoint, in which case the algorithm will throw an error.
We now address the case where the algorithm returns an error. Lemma 87. If FindFP($s$) returns an error for a $k$-slice $s \in \mathrm{Slice}_d(2^{\kappa_1}, 2^{\kappa_2}, \dots, 2^{\kappa_d})$, then the pair of points $(v^{(\ell)}, v^{(h)})$ indicated by the error witnesses that $f$ is not a contraction map.
Proof. The algorithm only returns an error when $t^{(h)}_k - t^{(\ell)}_k \le 1/2^{\kappa_k - 1}$ and the point $v^*$ returned by FindFP($t^*$) is not a $k$-dimensional fixpoint, where $t^*$ is the slice obtained by setting $t_k \leftarrow t^*_k$ in line 24. By induction we know that $v^*$ must be a $(k-1)$-dimensional fixpoint, since the recursive call to the algorithm didn't throw an error. If $f$ is a contraction map, then the fact that $v^{(\ell)}$ and $v^{(h)}$ are $(k-1)$-dimensional fixpoints of $f|_s$ with $f|_s(v^{(\ell)})_k - v^{(\ell)}_k > 0$ and $f|_s(v^{(h)})_k - v^{(h)}_k < 0$ together implies that the $k$th coordinate of the true fixpoint of $f|_s$ lies in $(v^{(\ell)}_k, v^{(h)}_k)$, by Lemma 60. By Lemma 30, we know that any $k$-dimensional fixpoint of $f$ has $k$th coordinate with bit-length at most $\kappa_k$. Thus, there is a unique value that the $k$th coordinate of the unique fixpoint of $f|_s$ can take, namely $t^*_k$. But $v^*$ is not a fixpoint of $f|_s$, and so we must conclude that $f|_s$ is not a contraction map, which implies that $f$ is not a contraction map. Thus, the pair $(v^{(\ell)}, v^{(h)})$ together witness that $f$ is not a contraction map.
Using Lemma 87 and applying induction using Lemma 85 as a base-case and Lemma 86 as an inductive step, the next theorem follows.

H.4 Details: finding an approximate fixed-point of Contraction
We now proceed to prove the correctness of our algorithm.

Analysis.
We will show that for any contraction map $f$ with respect to an $\ell_p$ norm, and any $\epsilon > 0$, if Algorithm 2 doesn't throw an error, then it returns an approximate fixpoint of $f$, i.e., a point $v \in [0,1]^d$ such that $\|f(v) - v\|_p \le \epsilon$. To do this, we will show that for any $k < d$ and $k$-slice $s \in \mathrm{Slice}_d$, ApproxFindFP($s$) will return an $(s, \ell_p, k)$-approximate fixpoint (when it doesn't throw an error). Since Algorithm 2 is recursive, our proof will be by induction. The next lemma establishes the base case of the induction and follows by design of the algorithm.
For the inductive step, we show that we can go from approximate fixpoints of (k − 1)-slices to approximate fixpoints of k-slices.
Proof. We observe that f |s is a contraction map by Lemma 59. We assume that v = ApproxFindFP(t) is a (t, ℓ p , k − 1)-approximate fixpoint of f |t for any value of t k ∈ [0, 1].
We first observe that if, after the first recursive invocations ApproxFindFP($t^{(\ell)}$) and ApproxFindFP($t^{(h)}$), the residual of the returned point in coordinate $k$ is at most $\epsilon_k(p, d)$, we return $v^{(h)}$ or $v^{(\ell)}$, respectively, so the output of ApproxFindFP($s$) satisfies the requirements of the lemma.