Tropical Effective Primary and Dual Nullstellensätze

Tropical algebra is an emerging field with a number of applications in various areas of mathematics. In many of these applications, appealing to tropical polynomials allows one to study properties of mathematical objects such as algebraic varieties and algebraic curves from the computational point of view. This makes it important to study both the mathematical and the computational aspects of tropical polynomials. In this paper we prove the tropical Nullstellensatz and, moreover, we give an effective formulation of this theorem. The Nullstellensatz is the next natural step in building an algebraic theory of tropical polynomials, and the effective version is relevant for the computational aspects of this field. Along the way we establish a simple formulation of min-plus and tropical linear dualities. We also observe a close connection between tropical and min-plus polynomial systems.


Introduction
A min-plus or tropical semiring is defined by the set K, which can be R, R_∞ = R ∪ {+∞}, Q or Q_∞ = Q ∪ {+∞}, endowed with two operations, tropical addition ⊕ and tropical multiplication ⊙, defined in the following way: x ⊕ y = min{x, y}, x ⊙ y = x + y.
Tropical polynomials are a natural analog of classical polynomials. In classical terms a tropical polynomial can be expressed in the form f(x) = min_i M_i(x), where each M_i(x) is a linear polynomial (a tropical monomial) in the variables x = (x_1, . . . , x_n), and all coefficients of each M_i are nonnegative integers, except for the free coefficient, which can be any element of K.
The degree of a tropical monomial M is the sum of its coefficients (excluding the free coefficient), and the degree of a tropical polynomial f, denoted by deg(f), is the maximal degree of its monomials. A point a ∈ K^n is a root of the polynomial f if the minimum min_i{M_i(a)} is either attained at at least two different monomials M_i or is infinite. We defer more detailed definitions of the basics of min-plus algebra to the Preliminaries.
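As a small illustration (our own sketch, not code from the paper), here is how the root condition over K = R can be tested: evaluate every monomial classically and count how many attain the minimum.

```python
def eval_monomial(coeff, exponents, point):
    """Classical value of a tropical monomial: coeff + sum_k i_k * x_k."""
    return coeff + sum(i * x for i, x in zip(exponents, point))

def is_tropical_root(monomials, point, eps=1e-9):
    """monomials: list of (coeff, exponent tuple). Over R a point is a root
    iff the minimum over the monomial values is attained at least twice."""
    values = [eval_monomial(c, I, point) for c, I in monomials]
    m = min(values)
    return sum(1 for v in values if abs(v - m) <= eps) >= 2

# f(x) = min(x, 1): the unique root is x = 1, where both monomials give 1
f = [(0, (1,)), (1, (0,))]
```

The polynomial `f` above is our own toy example; the infinite case over R_∞ is ignored in this sketch.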
Tropical polynomials have appeared in various areas of mathematics and found many applications (see, for example, [14,21,25,22,23,13]). One of the most important advantages of tropical algebra is that it makes some properties of classical mathematical objects computationally accessible [27,14,21,25]. One of the main goals of min-plus mathematics is to build a theory of tropical polynomials that would help to work with them and would possibly lead to new results in the related areas. Computational reasons, on the other hand, make it important to keep the theory maximally computationally efficient.
The best studied case so far is that of linear tropical polynomials and systems of linear tropical polynomials. For them analogs of a large part of the theory of classical linear polynomials have been established. This includes studies of tropical analogs of the rank of a matrix and the independence of vectors [5,16,1], the analog of the determinant of a matrix and its properties [23], and the analog of Gaussian triangular form [9]. The solvability problem for tropical linear systems has also been studied from the complexity point of view. Interestingly, it turned out to be polynomially equivalent to the well-known mean payoff games problem [10].
For tropical polynomials of arbitrary degree less is known. In [24] the radical of a tropical ideal was explicitly described. In [27] it was shown that the solvability problem for tropical polynomial systems is NP-complete.
Along with tropical polynomials, min-plus polynomials have also been studied. A min-plus polynomial is an expression of the form min_i M_i(x) = min_j L_j(x), where M_i and L_j are tropical monomials. A point a ∈ K^n is a root of the polynomial if min_i M_i(a) = min_j L_j(a).
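The min-plus root condition can be tested analogously (again our own sketch, over K = R): a point is a root when the two minima agree.

```python
def minplus_value(monomials, point):
    """monomials: list of (coeff, exponent tuple); classical value of the
    tropical sum min_i M_i at `point`."""
    return min(c + sum(i * x for i, x in zip(I, point)) for c, I in monomials)

def is_minplus_root(left, right, point, eps=1e-9):
    """A point is a root of min_i M_i = min_j L_j iff both sides agree."""
    return abs(minplus_value(left, point) - minplus_value(right, point)) <= eps

# toy example: min(x) = min(0), whose only root is x = 0
left, right = [(0, (1,))], [(0, (0,))]
```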
Min-plus polynomials have been studied mainly for their connections to dynamic programming (see [4,17]). As in the case of tropical polynomials, the best studied case here is that of linear min-plus polynomials [4]. Also, in [10] a connection between min-plus and tropical linear polynomials was established.
For min-plus polynomials of arbitrary degree much less is known. We are only aware of a result on the computational complexity of systems of min-plus polynomials: the paper [11] shows that the solvability problem is NP-complete.
Our results. The next natural step in the development of the theory of tropical polynomials would be an analog of the classical Nullstellensatz, the theorem which for classical polynomials constitutes one of the cornerstones of algebraic geometry. The tropical Nullstellensatz was already addressed in the paper [8], which established a general idea for approaching this theorem in the tropical case through a dual formulation. Moreover, in [8] a conjecture was formulated (restated below as Conjecture 3) capturing the formulation of the tropical dual Nullstellensatz, and this conjecture was proven for the case of polynomials in one variable. Previously, in [26] the tropical dual Nullstellensatz was established for a pair of polynomials (k = 2) in one variable, relying on the classical resultant and on Kapranov's theorem [6,26].
More specifically, [8] considered the Cayley matrix of a system of tropical polynomials F = {f_1, . . . , f_k}. This matrix can easily be constructed from F: we consider all polynomials f_i + M_j (in classical notation) of degree at most N, where N is a parameter and M_j is a tropical monomial. We put the coefficients of these polynomials in the rows of the matrix, where the columns of the matrix correspond to monomials. Empty entries of the matrix are filled with ∞. The resulting matrix is denoted by C_N. In [8] it was conjectured that the system of polynomials F has a solution iff the tropical linear system with the matrix C_N has a solution, and moreover that N can be bounded by some function of n, k and the degrees of the polynomials in F (this is what effectiveness refers to).
In this paper we prove this conjecture. Moreover, we show an effective version of the theorem. That is, we give bounds on N and provide examples showing that they are close to tight. These bounds are relevant for the computational aspects of tropical polynomial systems. Surprisingly, it turns out that the cases of the tropical semiring with and without ∞ differ dramatically. More specifically, in the case of the tropical semirings K = R and K = Q we show that F has a solution iff the tropical linear system with the matrix C_N has a solution, where N = (n + 2) · k · d, d is the maximal degree of the polynomials in F, k is the number of polynomials in F and n is the number of variables. For the case of the tropical semirings K = R_∞ and K = Q_∞ we show a similar result, but with N = (Cd)^{min(n,k)} for some constant C. Thus for the case without ∞ the bound on N is polynomial in n, k, d, while for the case with ∞ the bound on N is still polynomial in d but exponential in n and k. We give examples showing that our bounds on N are qualitatively optimal, that is, the difference between the values of N in these cases is not an artifact of the proof but is unavoidable. However, quantitatively there is a gap between the upper and lower bounds; see Section 3 for details.
Regarding the substantial gap between the required degrees in the finite and infinite cases, we observe that there is a similar situation for the classical Nullstellensatz. Indeed, we show that in the case of the semiring R the bound in the tropical effective Nullstellensatz depends on the sum of the degrees of the polynomials, while in the case of the larger semiring R_∞ the bound depends on the product of the degrees (Theorems 4 and 10). We recall that for systems of classical polynomials over an algebraically closed field the bound in the effective Nullstellensatz depends on the sum of the degrees of the polynomials in the homogeneous (projective) case [19,20], while the bound depends on the product of the degrees for arbitrary polynomials (the affine case) [7,18].
As a consequence of the tropical dual Nullstellensatz we obtain its infinite version. Namely, a system of tropical polynomials has a solution iff an infinite tropical linear system with an infinite Cayley matrix C (that is, with no bound on the degree) has a solution. Note that the latter system makes sense because each row of C contains just a finite number of finite entries. This infinite version was conjectured in [8], where it was also observed that a similar infinite version of the classical Nullstellensatz holds.
Next we show the primary version of the tropical Nullstellensatz. We view the Nullstellensatz as a duality result for systems of polynomials: if there is no solution to the system of polynomials, then some positive property holds (something does exist). In the classical case this positive property is the containment of 1 in the ideal generated by the polynomials (over an algebraically closed field). The naive analog does not hold in the tropical case. Indeed, for example, the tropical system {min(x, 0), min(x, 1)} has no solutions, but the ideal generated by it does not contain polynomials with only one monomial (only these polynomials have no finite roots) and in particular does not contain the polynomial 0. Basically, the point is that in the tropical semiring there is no subtraction, so in any algebraic combination of polynomials no monomials cancel out. To overcome this difficulty we introduce the notion of a nonsingular tropical algebraic combination of tropical polynomials (see the definition in the Preliminaries; here we only note that the property is simple and straightforward to check). For the primary tropical Nullstellensatz we show that there is no solution to the tropical polynomial system F iff there is a nonsingular tropical algebraic combination of polynomials in F of degree at most N. We show this result for both cases of the tropical semiring, with and without ∞, and the value of N in both cases corresponds to the size of the Cayley matrix in the tropical dual Nullstellensatz.
To establish the primary Nullstellensatz we need a duality for tropical linear systems. We show this duality result as a side step. However, we note that this result is heavily based on already known results [2] and should be considered more as an observation.

To avoid confusion we note that the word 'dual' is used in two different meanings. First, we use it in the term "dual Nullstellensatz" as opposed to the standard version of the Nullstellensatz. This means that the dual Nullstellensatz is obtained from the standard Nullstellensatz by (linear) duality. Second, we use the word 'dual' in the term "duality result" to denote a general type of results. Since the standard Nullstellensatz is a duality result itself, applying linear duality to it results in a non-duality result. Thus, the dual Nullstellensatz is not a duality result.
We also prove similar results for the case of min-plus polynomials. As a byproduct of our analysis we show a close connection between tropical and min-plus systems of polynomials. We argue that these two models are very closely connected and that this connection can be used to establish new results in tropical algebra. The observation is that some results (like linear duality) are easier to obtain for min-plus polynomials and then translate to tropical polynomials, while some other results (like the Nullstellensatz) are easier to obtain for tropical polynomials and then translate to min-plus polynomials. In our opinion it is fruitful for the further development of the theory to consider both models simultaneously.
Our techniques. We use the general approach of the paper [8] to the Nullstellensatz through the dual formulation.
To establish the dual Nullstellensatz we use methods of discrete geometry dealing with integer polyhedra. First we obtain the dual Nullstellensatz for the case without ∞. The case with ∞ requires considerable additional work.
To obtain the primary Nullstellensatz we apply the duality results for linear tropical polynomials. We note that these results rely on completely different combinatorial techniques, namely on the connection to mean payoff games [2].
Other works on the tropical Nullstellensatz. The paper [15] established a Nullstellensatz for the tropical semiring augmented with additional elements (called ghosts). This result is in line with other results [25] trying to capture tropical mathematics by means of classical mathematics. However, the tropical semiring augmented with ghosts constitutes (logically) a completely different model compared to the usual tropical semiring. Thus our results are incomparable with those of the paper [15].
We also note that the paper [24] (which has 'Nullstellensatz' in the title) takes a completely different view of the Nullstellensatz. We consider the Nullstellensatz as a result on the solvability of a system of polynomials, while the paper [24] views the Nullstellensatz as a result on the structure of the radical of a tropical ideal. As can easily be seen, for example from our results, in the translation from the classical world to the tropical one the connection between these two objects changes drastically (cf. the example F = {min(x, 0), min(x, 1)} above). Thus our results are incomparable with the results of [24] as well.
The rest of the paper is organized as follows. In Section 2 we introduce the main definitions. In Section 3 we state our results. In Section 4 we show the tropical dual Nullstellensatz. In Section 5 we establish the connection between tropical and min-plus polynomial systems. In Section 6 we show the min-plus dual Nullstellensatz. In Section 7 we show the tropical and min-plus primary Nullstellensätze. In Section 8 we show the min-plus and tropical linear dualities. Sections 5 and 8 can be read independently.

Min-plus algebra
Tropical and min-plus polynomials. A min-plus or tropical semiring is defined by the set K, which can be R, R_∞ = R ∪ {+∞}, Q or Q_∞ = Q ∪ {+∞}, endowed with two operations, tropical addition ⊕ and tropical multiplication ⊙, defined in the following way: x ⊕ y = min{x, y}, x ⊙ y = x + y.
Below we mainly consider K = R and K = R_∞. The proofs, however, translate literally to the cases of Q and Q_∞. A tropical (or min-plus) monomial in the variables x_1, . . . , x_n is defined as

c ⊙ x_1^{⊙i_1} ⊙ . . . ⊙ x_n^{⊙i_n},

where c is an element of the semiring K and i_1, . . . , i_n are nonnegative integers. In usual notation the monomial is

c + i_1 x_1 + . . . + i_n x_n.

The degree of the monomial is defined as the sum i_1 + . . . + i_n. We denote x = (x_1, . . . , x_n) and for I = (i_1, . . . , i_n) we introduce the notation x^{⊙I} = x_1^{⊙i_1} ⊙ . . . ⊙ x_n^{⊙i_n}. A tropical polynomial is a tropical sum of tropical monomials, or in usual notation f = min_i M_i. The degree of the tropical polynomial f, denoted by deg(f), is the maximal degree of its monomials. A point a ∈ K^n is a root of the polynomial f if the minimum min_i{M_i(a)} is either attained at at least two different monomials M_i or is infinite.
A min-plus polynomial is an expression of the form

min_i M_i(x) = min_j L_j(x),

where M_i, L_j are min-plus monomials. The degree of a min-plus polynomial is the maximal degree among the monomials M_i and L_j over all i, j. A point a ∈ K^n is a root of this polynomial if the equality holds for x = a.
Linear polynomials. An important special case of tropical and min-plus polynomials is that of linear polynomials. They can be defined as general tropical polynomials of degree 1. However, it is convenient to mean by a linear polynomial an expression of the form

a_1 ⊙ x_1 ⊕ . . . ⊕ a_n ⊙ x_n = min(a_1 + x_1, . . . , a_n + x_n).

That is, we assume that every variable is present exactly once. A tropical linear system min_j(a_{ij} ⊙ x_j), i = 1, . . . , m, can be naturally associated with its matrix A ∈ K^{m×n}. We will also use the matrix notation A ⊙ x for such a system. Analogously, a min-plus linear system can be associated with a pair of matrices A and B corresponding to the left-hand sides and the right-hand sides of the equations. We will also write a min-plus linear system in the matrix form A ⊙ x = B ⊙ x. It will also be convenient to consider min-plus linear systems of (componentwise) inequalities A ⊙ x ≤ B ⊙ x. It is not hard to see that their expressive power is the same as that of equations.
Lemma 1. For any min-plus system of linear equations there is an equivalent system of min-plus linear inequalities and vice versa.
Proof. Indeed, each min-plus linear equation L_1(x) = L_2(x) is equivalent to the pair of min-plus inequalities L_1(x) ≤ L_2(x) and L_2(x) ≤ L_1(x). On the other hand, a min-plus linear inequality L_1(x) ≤ L_2(x) is equivalent to the min-plus equation L_1(x) = min(L_1(x), L_2(x)). It is not hard to see that the last equation can be transformed into the form of a min-plus linear equation.
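The two translations in this proof are easy to implement. A sketch (our own ad hoc representation, not code from the paper): a min-plus linear polynomial over R is a coefficient vector a with value L_a(x) = min_j(a_j + x_j).

```python
def val(a, x):
    """Value of the min-plus linear polynomial with coefficients a at x."""
    return min(ai + xi for ai, xi in zip(a, x))

def eq_to_ineqs(a, b):
    # L_a = L_b  <=>  L_a <= L_b and L_b <= L_a
    return [(a, b), (b, a)]

def ineq_to_eq(a, b):
    # L_a <= L_b  <=>  L_a = min(L_a, L_b), and the right-hand side is again
    # a min-plus linear polynomial, with coefficients min(a_j, b_j)
    return a, [min(ai, bi) for ai, bi in zip(a, b)]
```

On any point x the inequality val(a, x) ≤ val(b, x) holds exactly when the translated equation does, and similarly for the pair of inequalities.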
There is one more important convention we make concerning the case of the tropical semiring with infinity. For two matrices A, B ∈ R_∞^{n×m} we say that the system A ⊙ x < B ⊙ x has a solution if there is x ∈ R_∞^m such that in each row of the system, if one of the sides is finite, then the strict inequality holds; the case where both sides are equal to ∞ is also allowed (informally, we can say that ∞ < ∞).
We also consider non-homogeneous tropical linear systems, in which each row may additionally contain a monomial a_{i,n+1} without variables. Such a system can be naturally associated with a matrix A ∈ K^{m×(n+1)} and written in the matrix form A ⊙ (x, 0). Analogously, we can consider non-homogeneous min-plus linear systems. Over R the solvability of a non-homogeneous system is equivalent to the solvability of the corresponding homogeneous system in n + 1 variables. Indeed, we can add the same number to all coordinates of a solution of the latter system to make x_{n+1} = 0. The same is true in the min-plus case. But the same is not true over R_∞: a homogeneous system always has a solution (just let x = (∞, . . . , ∞)), while a non-homogeneous system does not always have a solution.
3 Results Statement

Tropical and Min-plus Nullstellensatz
Definition 2. For a given system of tropical polynomials F = {f_1, . . . , f_k} in n variables we introduce its infinite Cayley matrix C. The columns of C correspond to nonnegative integer vectors I ∈ Z^n_+ and the rows of C correspond to pairs (j, J), where 1 ≤ j ≤ k and J ∈ Z^n_+. For given I and (j, J) we let the entry c_{(j,J),I} be equal to the coefficient of the monomial x^I in the polynomial x^J ⊙ f_j (if there is no such monomial in the polynomial we let the entry be +∞). By C_N we denote the finite submatrix of C consisting of the columns I such that i_1 + . . . + i_n ≤ N and the rows which have all their finite entries in these columns. The tropical linear system associated with C_N will be of interest to us. Over R_∞ we consider the non-homogeneous system with the matrix C_N; the column corresponding to the constant monomial serves as the non-homogeneous column.
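A small sketch (our own illustration, with polynomials represented as dicts from exponent vectors to coefficients) of how C_N can be assembled following this definition: a row (j, J) is kept exactly when all finite entries of x^J ⊙ f_j land in columns of degree at most N, i.e. when |J| ≤ N − deg(f_j).

```python
from itertools import product

INF = float('inf')

def monomials_up_to(n, N):
    """All exponent vectors I in Z^n_+ with |I| <= N, in lexicographic order."""
    return [I for I in product(range(max(N, -1) + 1), repeat=n) if sum(I) <= N]

def cayley_matrix(polys, n, N):
    """polys: list of dicts {exponent tuple: coefficient} for f_1, ..., f_k.
    Rows correspond to pairs (j, J); entry at column I is the coefficient of
    x^I in x^J (tropically) times f_j, and empty entries are +infinity."""
    cols = monomials_up_to(n, N)
    col_index = {I: t for t, I in enumerate(cols)}
    rows = []
    for f in polys:
        d = max(sum(I) for I in f)            # degree of f_j
        for J in monomials_up_to(n, N - d):   # shifts keeping all entries in C_N
            row = [INF] * len(cols)
            for I, c in f.items():
                shifted = tuple(a + b for a, b in zip(I, J))
                row[col_index[shifted]] = c   # coeff of x^(I+J) in x^J (x) f_j
            rows.append(row)
    return rows, cols
```

For example, for the single polynomial min(x, 1) in one variable and N = 2 this produces two rows, corresponding to the shifts J = (0) and J = (1).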
For a system of min-plus polynomials F = {f_1 = g_1, . . . , f_k = g_k} we analogously introduce the pair of matrices C and D corresponding to the left-hand sides and the right-hand sides of the polynomials respectively. In the same way we introduce matrices C_N, D_N and the corresponding linear systems C_N ⊙ y = D_N ⊙ y. Analogously, for the case of R_∞ we consider non-homogeneous systems.
In the paper [8] three forms of the tropical dual Nullstellensatz theorem were conjectured. We state the strongest of them, the effective Nullstellensatz.

Conjecture 3 ([8]). There is a function N of n and of deg(f_i) for 1 ≤ i ≤ k such that the system of polynomials F has a common tropical root iff the tropical linear system corresponding to the matrix C_N has a solution.
Note that the classical analog of this statement is precisely the effective Nullstellensatz theorem in dual form (see [8] for a detailed discussion).
In [8] the conjecture was proven for the case n = 1. In this paper we prove the general case of the conjecture.

Theorem 4 (Tropical Dual Nullstellensatz). Consider the system of tropical polynomials F = {f_1, . . . , f_k} in n variables and let d be the maximal degree of the polynomials in F.

(i) Over the semiring R the system F has a solution iff the Cayley tropical linear system C_N ⊙ y for N = (n + 2) · k · d has a solution.
(ii) Over the semiring R_∞ the system F has a solution iff the Cayley tropical non-homogeneous linear system C_N ⊙ y for N = poly(n, k) · (2d)^{min(n,k)} has a solution.
We note that we can also consider the infinite Cayley tropical linear system C ⊙ y. It makes sense since each row of C has only a finite number of finite entries. As a trivial corollary of the previous theorem we deduce an infinite version of the Tropical Dual Nullstellensatz.

Corollary 5. The system of tropical polynomials F has a solution iff the infinite Cayley tropical linear system C ⊙ y has a solution.

The result holds for both R and R_∞ semirings.
We also show a dual Nullstellensatz for the min-plus case.

Theorem 6 (Min-Plus Dual Nullstellensatz). Consider the system of min-plus polynomials F = {f_1 = g_1, . . . , f_k = g_k} in n variables. Denote by d_i the degree of the polynomial f_i = g_i and let d = max_i d_i.
(i) Over the semiring R the system F has a solution iff the Cayley min-plus linear system C_N ⊙ y = D_N ⊙ y for N = (n + 2) · k · d has a solution.
(ii) Over the semiring R_∞ the system F has a solution iff the non-homogeneous Cayley min-plus linear system C_N ⊙ y = D_N ⊙ y for N = poly(n, k) · (2d)^{min(n,k)} has a solution.
As in the tropical case, an infinite version of the min-plus dual Nullstellensatz follows.
Corollary 7. Consider the system of min-plus polynomials F = {f_1 = g_1, . . . , f_k = g_k} in n variables. The system F has a solution iff the infinite Cayley min-plus linear system C ⊙ y = D ⊙ y has a solution.
The result holds for both R and R ∞ semirings.
We provide examples showing that our bounds on N are qualitatively tight. Namely, for the semiring R we construct a family F of n + 1 tropical (or min-plus) polynomials of degree d such that F has no solution, but the Cayley tropical (or min-plus) linear system for N = (d − 1)(n − 1) has a solution. For the semiring R_∞, for any d > 1 we construct a system F of n + 1 tropical (or min-plus) polynomials of degree d such that F has no solution, but the Cayley tropical (or min-plus) linear system for N = d^{n−1} − 1 has a solution.
We note that quantitatively there is room for improvement between our lower and upper bounds on N. The gap is more substantial in the case of the semiring R. Assuming for simplicity that n = k, our upper bound gives approximately N ∼ dn² and our lower bound gives N ∼ dn. Thus we can formulate an open problem.
Open Problem. Close the gap between the upper and lower bounds on N in the tropical Nullstellensatz.
Next we establish the Nullstellensatz in a more standard primary form. We start with the more intuitive min-plus Nullstellensatz.
Theorem 8 (Min-Plus Primary Nullstellensatz). Consider the system of min-plus polynomials F = {f_1 = g_1, . . . , f_k = g_k} in n variables. Denote by d_i the degree of the polynomial f_i = g_i and let d = max_i d_i.
Over the semiring R the system F has no solution iff we can construct an algebraic min-plus combination f = g of the polynomials of F of degree at most N, with N as in the min-plus dual Nullstellensatz, such that for each monomial M = x_1^{⊙j_1} ⊙ . . . ⊙ x_n^{⊙j_n} its coefficient in f is greater than its coefficient in g. In the algebraic combination f = g we allow using not only the polynomials f_i = g_i but also g_i = f_i.
Over the semiring R_∞ the system F has no solution iff we can construct an algebraic combination f = g of degree at most N = poly(n, k) · (2d)^{min(n,k)} such that for each monomial M = x_1^{⊙j_1} ⊙ . . . ⊙ x_n^{⊙j_n} its coefficient in f is greater than its coefficient in g, with the additional property that the constant term in g is finite.
For the tropical case we will need the following definition.
Definition 9. A tropical algebraic combination g = g_1 ⊕ . . . ⊕ g_m of the polynomials of F is called nonsingular if the following two properties hold:
• for each monomial M of g there is a (unique) 1 ≤ l(M) ≤ m such that the coefficient of M in the polynomial g_{l(M)} is less than the coefficients of M in all other polynomials g_j for j ≠ l(M);
• for different M and M′ we have l(M) ≠ l(M′).
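The two conditions are indeed straightforward to check mechanically. A sketch (our own representation, not from the paper: each summand of the combination is a dict from exponent vectors to coefficients):

```python
INF = float('inf')

def is_nonsingular(summands):
    """summands: the terms g_1, ..., g_m of a tropical algebraic combination;
    their tropical sum g is the coordinatewise minimum. Checks that every
    monomial M of g has a unique strict minimizer l(M) among the summands and
    that l(M) != l(M') for different monomials M, M'."""
    support = set()
    for s in summands:
        support.update(s)
    chosen = {}
    for M in support:
        coeffs = [s.get(M, INF) for s in summands]
        m = min(coeffs)
        if m == INF:
            continue
        winners = [l for l, c in enumerate(coeffs) if c == m]
        if len(winners) != 1:
            return False              # the minimum is not strict
        chosen[M] = winners[0]
    return len(set(chosen.values())) == len(chosen)   # injectivity of l

# each summand wins on its own monomial: nonsingular
good = [{(0,): 0, (1,): 5}, {(0,): 3, (1,): 1}]
# the first summand wins on both monomials: singular
bad = [{(0,): 0, (1,): 1}, {(0,): 2, (1,): 2}]
```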
Now we can formulate the tropical Nullstellensatz in primary form.
Theorem 10 (Tropical Primary Nullstellensatz). Consider the system of tropical polynomials F = {f_1, . . . , f_k} in n variables. Denote by d_i the degree of the polynomial f_i and let d = max_i d_i.
The system F has no solution over R iff there is a nonsingular algebraic combination g for it of degree at most N, with N as in the tropical dual Nullstellensatz. The system F has no solution over R_∞ iff there is a nonsingular algebraic combination g for it of degree at most N = poly(n, k) · (2d)^{min(n,k)} and with a finite constant monomial.
For the proofs of the last two theorems we use min-plus and tropical linear duality.

Linear Duality
We prove the following result on linear min-plus duality.
Lemma 11. Let A, B ∈ R_∞^{n×m} be two matrices. For any subset S ⊆ [m] exactly one of the following is true.

1. There is a solution to A ⊙ x ≤ B ⊙ x such that the coordinate x_i is finite for all i ∈ S.
2. There is a solution to B^T ⊙ y < A^T ⊙ y such that for some i ∈ S the i-th coordinate of B^T ⊙ y is finite.

For any subset S ⊆ [m] exactly one of the following is true.

1. There is a solution to A ⊙ x ≤ B ⊙ x such that for some i ∈ S the coordinate x_i is finite.
2. There is a solution to B^T ⊙ y < A^T ⊙ y such that the i-th coordinates of B^T ⊙ y are finite for all i ∈ S.
The proof of this lemma is based on the connection of min-plus linear systems with mean payoff games established in the paper [2]. Though the proof is rather simple once one has this connection, we are not aware of a statement and proof of these results in the literature.
As a corollary of this lemma we obtain the following simple formulation of min-plus linear duality.
Corollary 12. For two matrices A, B ∈ R^{n×m} exactly one of the following is true.

1. There is a solution to A ⊙ x ≤ B ⊙ x.
2. There is a solution to B^T ⊙ y < A^T ⊙ y.

For two matrices A, B ∈ R_∞^{n×m} exactly one of the following is true.

1. There is a solution to A ⊙ x ≤ B ⊙ x with at least one finite coordinate.
2. There is a finite solution to B^T ⊙ y < A^T ⊙ y.

For two matrices A, B ∈ R_∞^{n×m} exactly one of the following is true.

1. There is a finite solution to A ⊙ x ≤ B ⊙ x.
2. There is a solution to B^T ⊙ y < A^T ⊙ y such that at least one coordinate of B^T ⊙ y is finite.
Since the corollary follows from Lemma 11 almost immediately, we present the proof here.
Proof. If we consider min-plus linear systems over R, then all coordinates of all vectors in Lemma 11 are finite, and the corollary follows immediately no matter which S we fix.
For the second part of the corollary let S = [m] and apply the second part of Lemma 11. Then the first property in the lemma is equivalent to the first property in the corollary. To see that the equivalence also holds for the second property, note that if for some y all coordinates of B^T ⊙ y are finite, then we can assume that all coordinates of y are also finite. Indeed, if there are infinite coordinates in y we can just set them to constants large enough not to change the value of the minimum in each row.
The last part of the corollary can be shown analogously by letting S = [m] and applying the first part of Lemma 11.
We show a similar result for tropical duality.

Lemma 13. Let A ∈ R_∞^{n×m} be a matrix. For any subset S ⊆ [m] exactly one of the following is true.

1. There is a solution to A ⊙ x such that the coordinate x_i is finite for all i ∈ S.
2. There is z such that in each row of A^T ⊙ z the minimum is attained only once or is equal to ∞, for each two rows with finite minima the minima are in different columns, and for some i ∈ S the i-th coordinate of A^T ⊙ z is finite.

For any subset S ⊆ [m] exactly one of the following is true.

1. There is a solution to A ⊙ x such that for some i ∈ S the coordinate x_i is finite.
2. There is z such that in each row of A^T ⊙ z the minimum is attained only once or is equal to ∞, for each two rows with finite minima the minima are in different columns, and the i-th coordinates of A^T ⊙ z are finite for all i ∈ S.
This result can be proven either through a reduction to min-plus linear systems or through an analysis of [9].
Just like in the case of min-plus linear systems we can get the following corollary.
Corollary 14. For a matrix A ∈ R^{n×m} exactly one of the following is true.

1. There is a solution to A ⊙ x.
2. There is z such that in each row of A^T ⊙ z the minimum is attained only once and for each two rows the minima are in different columns.

For a matrix A ∈ R_∞^{n×m} exactly one of the following is true.

1. There is a finite solution to A ⊙ x.
2. There is z such that in each row of A^T ⊙ z the minimum is attained only once or is equal to ∞ and for each two rows the (unique) minima are in different columns.

For a matrix A ∈ R_∞^{n×m} exactly one of the following is true.

1. There is a solution to A ⊙ x.
2. There is a finite z such that in each row of A^T ⊙ z the minimum is attained only once and for each two rows the minima are in different columns.
The proof of this corollary is completely analogous to min-plus case.
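Here is a concrete illustration of the first pair of alternatives in Corollary 14 (our own example, over R; `is_root` and `is_certificate` are hypothetical helper names). For A = [[0, 0], [0, 1]] the system A ⊙ x has no solution, since the rows force x_1 = x_2 and x_1 = 1 + x_2 simultaneously, and the corollary then guarantees a certificate z for the transposed matrix.

```python
def is_root(A, x):
    """x solves the tropical system A (x) x iff in every row the minimum of
    a_ij + x_j is attained at least twice."""
    for row in A:
        vals = [a + xi for a, xi in zip(row, x)]
        m = min(vals)
        if sum(1 for v in vals if v == m) < 2:
            return False
    return True

def is_certificate(A, z):
    """z certifies unsolvability: each row of A^T (x) z attains its minimum
    only once, and different rows attain it in different columns."""
    At = [list(col) for col in zip(*A)]
    cols = []
    for row in At:
        vals = [a + zi for a, zi in zip(row, z)]
        m = min(vals)
        argmins = [j for j, v in enumerate(vals) if v == m]
        if len(argmins) != 1:
            return False
        cols.append(argmins[0])
    return len(set(cols)) == len(cols)

A = [[0, 0], [0, 1]]
```

For this A, z = (0.5, 0) is such a certificate: the rows of A^T ⊙ z attain their unique minima in columns 2 and 1 respectively.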

Tropical vs. Min-plus
We also establish the connection between tropical and min-plus polynomial systems.
Lemma 15. For both R and R_∞, given a system of tropical polynomials we can construct a system of min-plus polynomials over the same set of variables and with the same set of solutions.
In the other direction we do not have such a simple connection, but we can still prove the following lemma.
Lemma 16. For any system of min-plus polynomials F in n variables there is a system of tropical polynomials T in 2n variables and an injective linear transformation H : R_∞^n → R_∞^{2n} such that the image of the solution set of F coincides with the solution set of T. The same is true over the semiring R.
The proof of this lemma follows the lines of the proof of the analogous statement for the case of linear polynomials in the paper [10].

Tropical Dual Nullstellensatz
First of all we fix some notation that we keep throughout the whole section. We assume that we are given a system of tropical polynomials F = {f_1, . . . , f_k} in the variables x_1, . . . , x_n.

This section is organized as follows. In Subsection 4.1 we introduce the required notation and show preliminary results. In Subsection 4.2 we give a proof outline. In Subsections 4.3 and 4.4 we give a proof for the case without ∞. In Subsection 4.5 we provide some clarifying examples. In Subsection 4.6 we prove the theorem for the case with ∞. Finally, in Subsection 4.7 we show that the upper bounds in our theorem are close to tight.

Preliminary definitions and results
Geometrical interpretation of tropical polynomials.

Definition 17. For a set D ⊆ Z^n and two functions f, g : D → R_∞ consider the maximal t ∈ R such that
1. f(I) + t ≤ g(I) holds for all I ∈ D, and
2. f(I) + t = g(I) holds for at least one I ∈ D.
We denote the set of points satisfying property 2 by Sing(f, g) and call them singularity points for the pair (f, g). If such t does not exist we let Sing(f, g) = ∅. We say that f is singular to g iff |Sing(f, g)| ≥ 2.
Geometrically, f is singular to g if we can shift the graph of f in the R^{n+1} space along the (n+1)-th coordinate in such a way that this graph lies below the graph of g and has at least two common points with it.
Note that the notion of singularity is nonsymmetric. It might be that f is singular to g, but g is not singular to f .
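Concretely, the singularity test amounts to computing the optimal shift t and counting the contact points. A sketch in Python (our own illustration; functions on a finite domain are represented as dicts, and infinite values, which never constrain t, are skipped):

```python
INF = float('inf')

def sing(f, g):
    """Sing(f, g): shift f up by the maximal t with f + t <= g; the points
    where equality is then attained form Sing(f, g). Only points where both
    functions are finite constrain t; if there are none, Sing is empty."""
    gaps = {I: g.get(I, INF) - c for I, c in f.items()
            if c < INF and g.get(I, INF) < INF}
    if not gaps:
        return set()
    t = min(gaps.values())                      # maximal admissible shift
    return {I for I, gap in gaps.items() if gap == t}

def is_singular(f, g):
    return len(sing(f, g)) >= 2

# toy example: shifting f up by t = 1 touches g at two points
f = {(0,): 0, (1,): 0, (2,): 0}
g = {(0,): 1, (1,): 1, (2,): 2}
```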
The following lemma follows directly from the definition.
In this paper we consider rows of the matrix C_N, solutions to C_N ⊙ y, and coefficient vectors of the f_i. All of them constitute vectors a whose coordinates are labeled by I ∈ D for some D ⊆ Z^n_+, that is, by vectors with nonnegative integer coordinates. With such a vector a we associate a function ϕ_a : Z^n → R_∞ letting ϕ_a(I) = a_I for I ∈ D and ϕ_a(I) = ∞ for I ∉ D. When the vector is the vector of coefficients of a polynomial f we denote the resulting function by ϕ_f for short. When the vector is the vector of coefficients of a polynomial f_i ∈ F we simplify the notation even further to ϕ_i. Note that, due to the definition of C_N, if c is the row of C_N labeled by (j, J), then ϕ_c(I) = ϕ_j(I − J) for all I. In what follows we reserve Greek letters for the functions dealing with the coefficients of polynomials and entries of the Cayley matrix to distinguish them from the functions f_i.
The motivation for our notion of singularity is that it captures the solvability of tropical polynomials.

Lemma 19. A vector y is a root of the tropical linear polynomial with coefficient vector c iff the function −ϕ_y is singular to ϕ_c.

Proof. Consider an arbitrary vector c and the corresponding tropical linear polynomial. The vector y is a root of this linear polynomial if the minimum in {ϕ_y(I) + ϕ_c(I)}_I is attained at least twice. Let t be the minimal number such that ϕ_y(I) + ϕ_c(I) + t ≥ 0 for all I. Then ϕ_y(I) + ϕ_c(I) + t = 0 for at least two different I. This means that −ϕ_y(I) − t ≤ ϕ_c(I) and equality holds for at least two points. Thus the function −ϕ_y is singular to ϕ_c.
The proof in the other direction follows the same lines.
In particular, the vector y is a solution to C_N iff −ϕ_y is singular to all ϕ_c, where c ranges over the rows of C_N. Now let c be the vector of coefficients of a tropical polynomial f, that is, c_I is the coefficient of the monomial x^I in f. Then its solutions are given by vectors x = (x_1, . . . , x_n), and the vector y described in the previous paragraph is in this case given by y_I = ⟨x, I⟩, the inner product of the vectors x and I. Thus in this case ϕ_y(I) = ⟨x, I⟩ is a linear function, defining a hyperplane in (n + 1)-dimensional space. We introduce the notation χ_x = −ϕ_y. Thus, from Lemma 19 we get the following result.

Lemma 20.
The vector x is a solution to f iff the hyperplane χ_x is singular to the function ϕ_f.
In particular, the system of polynomials F has a solution iff there is a hyperplane singular to ϕ_i for all i = 1, . . . , k.
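As a concrete illustration of the root condition used above (the minimum being attained at least twice, or being infinite), here is a minimal sketch; the dictionary representation and the name `is_root` are ours, not the paper's.

```python
# A minimal sketch (ours): a tropical polynomial as a dict mapping exponent
# tuples I to coefficients c_I, and the root test: min_I (c_I + <I, x>) is
# infinite or attained on at least two monomials.
import math

def is_root(poly, x):
    """Return True iff x is a root of the tropical polynomial poly."""
    values = [c + sum(i * xi for i, xi in zip(I, x)) for I, c in poly.items()]
    m = min(values)
    if math.isinf(m):
        return True  # an infinite minimum counts as a root
    return sum(1 for v in values if v == m) >= 2

# f(x1, x2) = min(x1, x2, 3): at (3, 3) all three monomials are equal,
# while at (1, 5) the monomial x1 alone attains the minimum.
f = {(1, 0): 0, (0, 1): 0, (0, 0): 3}
print(is_root(f, (3, 3)))  # True
print(is_root(f, (1, 5)))  # False
```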
As a result, if there is a hyperplane singular to ϕ_i for all i = 1, . . . , k, then it clearly provides a solution to C_N. This shows the simple direction of the tropical dual Nullstellensatz theorem. What we need to show for the other direction is that if there is some function singular to all translations of all ϕ_i's within the rectangle |I| ≤ N, then there is also a singular hyperplane.
For the proof of Theorem 4 it is convenient to use the language of polytopes. We summarize it in the next definition.
Definition 21. To switch to polytope notation for a polynomial f ∈ F we consider the graph of the function ϕ_f, {(I, ϕ_f(I)) | |I| ≤ N}, and along with each point (I, ϕ_f(I)) we consider all points (I, t) above it, that is, such that t > ϕ_f(I). We take the convex hull in R^{n+1} of all these points and call the resulting polytope P(f) the (extended) Newton polytope of f. We note that this construction is quite standard [14,23,25]. By the bottom of P(f) we denote the set of points x = (x_1, . . . , x_n, x_{n+1}) ∈ P(f) such that there are no points of P(f) below them, that is, for any ε > 0 we have (x_1, . . . , x_n, x_{n+1} − ε) ∉ P(f). Note that the bottom of P(f) can be considered as a partial function on R^n, and it is not hard to see geometrically that a hyperplane is singular to ϕ_f iff it is singular to the bottom of P(f). This is not necessarily true for an arbitrary function ϕ_a in place of a hyperplane. For the given system of polynomials f_1, . . . , f_k we denote the resulting convex polytopes by P_1, . . . , P_k.
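For n = 1 the bottom of the extended Newton polytope is just the lower convex hull of the points (I, c_I); the following sketch (helper name ours) computes its vertices with a monotone-chain scan.

```python
# A sketch (ours): vertices of the lower convex hull of points (I, c_I),
# computed by Andrew's monotone-chain algorithm.
def lower_hull(points):
    """points: list of (I, c_I) with distinct I.  Returns the vertices of
    the lower convex hull, sorted by I."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # pop while the last turn is not strictly convex from below
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# For f = 3 ⊕ 0 ⊙ x ⊕ 4 ⊙ x^2 ⊕ 1 ⊙ x^3 the monomial 4 ⊙ x^2 lies above
# the segment from (1, 0) to (3, 1) and drops off the bottom.
print(lower_hull([(0, 3), (1, 0), (2, 4), (3, 1)]))
# [(0, 3), (1, 0), (3, 1)]
```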
To bring together the functional language and the language of polytopes we introduce two more pieces of notation. For a function ϕ we denote by G(ϕ) the graph of the function in R^{n+1}. For the functions ϕ_i, to make the notation more intuitive, instead of G(ϕ_i) we write G(f_i). Also, for an extended Newton polytope P we denote by β_P : Z^n → R the function whose graph is the bottom of P. For the polytopes P_i we shorten this notation to β_i.
Remark. We note that in the paper [8] the conjecture on the tropical dual Nullstellensatz was considered not for the original Cayley matrix, but for the Cayley matrix in which we have already switched to the convex hull. Our proof works for both settings, but we consider it more natural to state it for the original Cayley matrix.
Convex polytopes. A convex polytope P in n-dimensional space can be specified by a set of linear functions L_1, . . . , L_k as P = {x ∈ R^n | L_i(x) ≥ 0 for all i = 1, . . . , k}. Any facet of the polytope can be specified by a set S ⊆ {1, . . . , k}: the facet corresponding to S is the set of points x ∈ P such that L_i(x) = 0 for all i ∈ S.
Thus we will always assume that the polytopes are closed. If we would like to talk about open polytopes, we talk about the polytope's interior instead. The same applies to the facets of polytopes: by default they are considered to be closed. We denote the interior of a polytope P by •P. For polytopes P_1 and P_2 we denote by P_1 + P_2 the Minkowski sum of these polytopes. For a natural number k we use the notation kP = P + . . . + P, where there are k summands on the right-hand side. For an n-dimensional vector α we denote by P + α the translation of P by the vector α, that is, P + α = {x + α | x ∈ P}. It is well known that if a polytope P is similar to a polytope Q then there is a homothety mapping P to Q. Throughout the section we prefer to use the homothety notation. The homothety with center x ∈ R^n and coefficient λ > 0 is the following bijective transformation of the space R^n: the point y ∈ R^n is sent to the point x + λ(y − x). We denote this transformation by h^λ_x.

Definition 22. Consider a polytope P, a set of points Q and a point x on the boundary of P. We say that Q touches P in x iff 1. x ∈ Q; 2. Q ⊆ P; 3. if Q contains a point y on the boundary of P, then y lies in a facet of P containing x.
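The two operations just introduced can be illustrated on finite vertex sets; the helper names below are ours and this is only a toy sketch: the pairwise sums of vertices contain the vertices of the Minkowski sum, and `homothety` implements h^λ_x.

```python
# A toy sketch (ours) of Minkowski sums and homotheties on finite vertex
# sets in the plane.
def minkowski_sum(P, Q):
    """All pairwise vertex sums; these contain the vertices of P + Q."""
    return sorted({(p[0] + q[0], p[1] + q[1]) for p in P for q in Q})

def homothety(x, lam, y):
    """h^lam_x sends y to x + lam * (y - x)."""
    return tuple(xi + lam * (yi - xi) for xi, yi in zip(x, y))

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(minkowski_sum(square, square))   # 9 lattice points; hull is 2 * square
print(homothety((0, 0), 3, (1, 2)))    # (3, 6)
```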
Below we collect some facts we will need on the structure of convex polytopes. Though they are simple and intuitive we give the proofs of them for the sake of completeness.
Lemma 23. Let P be a convex polytope and let x, y, z be points in it lying on the same line in the specified order. If y belongs to some facet of P, then x also belongs to the same facet.
Proof. Suppose on the contrary that y belongs to some facet and x does not. Then there is some inequality L among the linear inequalities defining P such that L(x) > 0 and L(y) = 0. Considering the values of L on the line containing x, y and z, we see that L is a linear function there, and thus clearly L(z) < 0. Therefore z is not in P and we have a contradiction.
Corollary 24. Let P be a convex polytope and let x, y, z, t be the points in it lying on the same line in the specified order. Then y belongs to some facet of P iff z belongs to the same facet.
Proof. Just apply Lemma 23 to the points y, z, t and to the points z, y, x.
Lemma 25. Let P be a convex polytope and let x be a point in P. Consider the transformation of P under h^λ_x for λ > 1. Denote the image of P under this transformation by P′. If P contains a point on some facet of P′, then this facet contains x.
Proof. Let y be a point of P. Then the point z = h^λ_x(y) = y + (λ − 1)(y − x) lies in P′. Thus by Lemma 23, if y is on some facet of P′, then x is also on this facet.

The proof outline
The key idea is to consider a large "enveloping" polytope P 0 . The main property of P 0 we will ensure is that for each point x on its bottom and for any i we can translate the polytope P i in such a way that it touches P 0 in x.
It turns out that for P 0 we can take just a Minkowski sum of P 1 , . . . , P k multiplied by a large enough number.
We show that we can choose one of the singular points in Sing(ϕ_a, β_{P_0}) in such a way that a facet containing this point gives a solution to the system F (Lemma 29).

The enveloping polytope
In this section we construct an enveloping polytope P 0 ⊆ R n+1 and prove its properties required for the proof of the theorem.
We just let P_0 = (n + 2)(P_1 + . . . + P_k), where all operations on polytopes are in the sense of Minkowski sum. It is clear that for each P_i the polytope P_0 can be represented as a union of translations of P_i by real vectors. However, we will need that all integer points can be represented by integer translations of vertices of P_i (we will actually need slightly more).
To prove this we will need a general fact on convex polytopes.
Lemma 26. Let P be an n-dimensional convex polytope and let P′ = (n + 2)P. Then for each point x ∈ •P′ there is a translation P + α with the following properties: 1. x is a vertex of P + α; 2. the center y of the homothety mapping P + α to P′ lies in •P′. It is easy to see that the second property is equivalent to the fact that P + α ⊆ •P′, but the current form of the lemma will be more convenient for us.
The main tools in the proof of this lemma are Caratheodory's Theorem, the notion of the center of mass and homothety transformation.
Proof. We first give a proof sketch and then proceed to the detailed proof. Since x is in P′ it lies in some simplex S′ generated by n + 1 vertices of P′. For S′ we consider each of its vertices and make a homothety with the center in it and the coefficient (n + 1)/(n + 2). The resulting n + 1 simplices cover all of S′ (even with overlap). So x lies in one of them, say in the one defined by the vertex v′_1. Then we can consider the translation S + α of the simplex S which is (n + 2) times smaller than S′, such that its vertex corresponding to v′_1 is mapped into x. Then S + α lies in S′. Now we can consider P′ and note that P + α is in P′. Formally this is proved via homothety. Now we give a formal proof following the outline above. Since x is a point in the convex polytope P′ it lies in the convex hull of its vertices. By Caratheodory's Theorem there are n + 1 vertices v′_1, . . . , v′_{n+1} of P′ such that x lies in their convex hull. We denote this convex hull by S′. We denote the corresponding vertices of P by v_1, . . . , v_{n+1}.
Let w_1, . . . , w_{n+1} be barycentric coordinates of x, that is, w_i ≥ 0 for all i, ∑_i w_i = 1 and x = ∑_i w_i v′_i. Without loss of generality let w_1 be the largest among the w_i; then w_1 ≥ 1/(n + 1) and nw_1 ≥ 1 − w_1. Let v′ = ∑_{i≥2} (w_i/(1 − w_1)) v′_i. Then x = w_1 v′_1 + (1 − w_1)v′, and thus the points v′_1, x and v′ are on the same line. Moreover, |x − v′_1| = ((1 − w_1)/w_1)|v′ − x| ≤ n|v′ − x| < (n + 1)|v′ − x| (observe that |v′ − x| is nonzero since w_1 is nonzero, being the largest weight).
Consider the homothety transformation of S′ with center v′_1 and coefficient (n+1)/(n+2). Denote the image of S′ by S_1. Then from the above we get that x ∈ S_1. Now we can consider S = Conv{v_1, . . . , v_{n+1}} and consider its translation S + α such that v_1 is placed at x. We have that S′ equals (up to a translation) (n + 2)S and S_1 equals (up to a translation) (n + 1)S. Consider the image of the vector (v′_1, x) (that is, the vector with starting point v′_1 and endpoint x) under the homothety of S_1 to S + α, and denote the resulting vector by (x, y). Then y ∈ S + α, |x − v′_1| = (n + 1)|y − x|, and thus |y − x| < |v′ − x|. Therefore if we consider the homothety with center y and coefficient (n + 2), then x is mapped into v′_1 and thus S + α is mapped into S′. Now we can consider the polytope P′ and the translation P + α. For this translation v_1 goes to x, and thus x is a vertex of P + α. Once again the homothety with center y and coefficient (n + 2) sends x to v′_1 and thus P + α to P′. Thus P + α lies in P′. It only remains to note that the points x, y, v′ lie on the same line in the specified order and all lie in P′. Thus by Lemma 23, since x ∈ •P′, we have y ∈ •P′.
Remark. We note that Lemma 26 does not hold for P′ = (n + 1)P. The example is very simple: just let P be the standard simplex, that is, the convex hull of the points {0, e_1, . . . , e_n}. Then P′ = (n + 1)P is the convex hull of the points {0, (n + 1)e_1, . . . , (n + 1)e_n}. Let x be the center of the polytope, that is, x = e_1 + . . . + e_n. Then for x to be a vertex of P + α we should have either α = e_1 + . . . + e_n, or α = e_1 + . . . + e_{i−1} + e_{i+1} + . . . + e_n for some i. In the first case y = (n + 1)(e_1 + . . . + e_n)/n and in the second case y = (n + 1)(e_1 + . . . + e_{i−1} + e_{i+1} + . . . + e_n)/n. In both cases y lies on the boundary of P′: in the first case it is in the convex hull of {(n + 1)e_1, . . . , (n + 1)e_n} and in the second case it is in the convex hull of {0, (n + 1)e_1, . . . , (n + 1)e_{i−1}, (n + 1)e_{i+1}, . . . , (n + 1)e_n}.
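The boundary claim in the remark can be checked numerically for n = 2 (a sanity check of ours, not part of the paper).

```python
# A numeric sanity check (ours) of the remark for n = 2: P' = (n + 1)P is
# the convex hull of {0, 3e1, 3e2} and x = e1 + e2 is its center.  For the
# translation alpha = e1 + e2 the homothety center is y = (n + 1)/n * alpha,
# which lands on the facet x1 + x2 = n + 1 of P', i.e. on the boundary.
n = 2
alpha = (1.0, 1.0)
y = tuple((n + 1) / n * a for a in alpha)
print(y)                 # (1.5, 1.5)
print(sum(y) == n + 1)   # True: y lies on the facet x1 + x2 = 3
```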

A facet of P 0 is singular
In this subsection we are going to finish the proof of Theorem 4(i). It will be convenient to introduce the notation P = P 1 + . . . + P k .

Lemma 27.
For any point x on the bottom of P_0 and for any f_j there is a vector β such that G(f_j) + β touches P_0 in x.
Proof. First we show that there is a translation of P touching P 0 in x.
If x is a vertex of P_0, then just note that there is a translation P + α lying inside P_0 and containing x. Since x is a vertex of P_0, it is also a vertex of P + α. The homothety h^{n+2}_x sends x as a vertex of P + α to x as the corresponding vertex of P_0 and thus sends P + α to P_0. Then by Lemma 25, P + α touches P_0 in x.
If x is not a vertex of P_0, denote the minimal dimension facet of P_0 containing x by Q_0. Clearly x is in the interior of Q_0. Since P_0 = (n + 2)P, there is a facet Q of P such that Q_0 = (n + 2)Q. By Lemma 26 we can find a translation Q + α such that x is a vertex of Q + α and the center y of the homothety h^{n+2}_y mapping Q + α to Q_0 lies in the interior of Q_0. The vertex x goes under this homothety to the corresponding vertex of P_0, and thus P + α goes to P_0. Note that by Lemma 25 we also get that P + α intersects the boundary of P_0 only in the facets incident to y, and thus only in the facets incident to x. Now note that P + α is a translation of the Minkowski sum of P_1, . . . , P_k; thus for each P_j there is a translation β such that P_j + β is in P + α and contains the point x. Since this point is a vertex of P + α, we have that x is a vertex of P_j + β. Note that P_j + β lies inside P + α and thus also can intersect the boundary of P_0 only in the facets containing x.
Finally note that the set G(f j ) + β is a subset of P j + β, but on the other hand contains all its vertices. Thus G(f j ) + β touches P 0 in x.
For the sake of convenience, throughout this subsection we call an (n + 1)-dimensional vector α integer if its first n coordinates are integers. Analogously, we call a point in R^{n+1} integer if its first n coordinates are integers.
Corollary 28. Consider the bottom β_{P_0} of P_0 and the vector {a_I}_I corresponding to it, that is, a_I = β_{P_0}(I). Consider the tropical polynomial g = ⊕_I a_I ⊙ x^I. Then for each f_j the polynomial g lies in the tropical ideal generated by f_j.
Proof. It is easier to give the proof in geometric terms. For each integer point x on the bottom of P_0 consider the translation G(f_j) + α_x touching P_0 in x. This translation corresponds to tropical multiplication of f_j by a monomial. Then it is easy to see that all integer points on the bottom of P_0 lie in the union of the sets G(f_j) + α_x over all x, and on the other hand all other integer points of this union lie in P_0. The union operation corresponds to the minimum operation on polynomials.
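The two geometric operations used in this proof have a direct polynomial meaning: multiplying f_j by a tropical monomial c ⊙ x^J shifts the coefficient function by (J, c), and tropical addition takes pointwise minima of coefficients. A sketch of ours:

```python
# A sketch (ours) of the polynomial counterparts of the geometric operations
# above: multiplication by a tropical monomial translates the coefficient
# function, and tropical sum is a coefficient-wise minimum.
import math

def monomial_times(poly, J, c):
    """c ⊙ x^J ⊙ f: shift every exponent by J and add c to every coefficient."""
    return {tuple(i + j for i, j in zip(I, J)): coef + c
            for I, coef in poly.items()}

def tropical_sum(*polys):
    """f ⊕ g ⊕ ...: coefficient-wise minimum over the summands."""
    out = {}
    for p in polys:
        for I, coef in p.items():
            out[I] = min(out.get(I, math.inf), coef)
    return out

f = {(0,): 2, (1,): 0}                       # f = 2 ⊕ 0 ⊙ x
g = tropical_sum(f, monomial_times(f, (1,), 5))
print(g)                                     # {(0,): 2, (1,): 0, (2,): 5}
```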
Lemma 29. Suppose the tropical system C N ⊙ y has a solution a.
(i) For the case of R there is a facet of P 0 such that some hyperplane containing it provides a solution to the tropical system F .
(ii) For the case of R_∞, if there is x ∈ Z^n such that β_{P_0}(x) ≠ ∞ and ϕ_a(x) ≠ ∞, then there is a facet of P_0 such that some hyperplane containing it provides a solution to the tropical system F.
Proof. Consider the functions ϕ_a and β_{P_0}. Since the polynomial corresponding to β_{P_0} is in the ideal generated by F and its degree is at most N, the vector {β_{P_0}(I)}_I is a tropical linear combination of rows of C_N. Thus a is a solution to the corresponding tropical linear equation. Since in both cases R and R_∞ there is x such that β_{P_0}(x) ≠ ∞ and ϕ_a(x) ≠ ∞, there is a singularity point in Sing(ϕ_a, β_{P_0}). The rest of the proof works for both cases.
For each point x ∈ Sing(ϕ_a, β_{P_0}) consider the lowest dimension of a facet of P_0 to which the point (x, β_{P_0}(x)) belongs, and denote by x the point in Sing(ϕ_a, β_{P_0}) which maximizes this minimal dimension. In simple words, we look for a singularity point in the most general position w.r.t. the polytope P_0. Let us denote the minimal dimension facet of P_0 containing (x, β_{P_0}(x)) by Q_0. Below we show that this is precisely the facet we are looking for.
Consider some polynomial f_j. By Lemma 27 there is a vector α such that G(f_j) + α touches P_0 in (x, β_{P_0}(x)). Denote by g the function with graph G(f_j) + α. Then, in particular, we have x ∈ Sing(β_{P_0}, g). Since we also have x ∈ Sing(ϕ_a, β_{P_0}), clearly x ∈ Sing(ϕ_a, g) (indeed, since x minimizes the functions g − β_{P_0} and β_{P_0} − ϕ_a, it also minimizes their sum). However, recall that a is a solution to the system C_N ⊙ y and g corresponds to one of the rows of C_N. Thus |Sing(ϕ_a, g)| ≥ 2. But any point minimizing g − ϕ_a should also minimize g − β_{P_0} and β_{P_0} − ϕ_a (since x does), thus any point in Sing(ϕ_a, g) should also be in both Sing(ϕ_a, β_{P_0}) and Sing(β_{P_0}, g). In particular, there is at least one more point besides x in Sing(β_{P_0}, g), and this means that there is another common point of G(f_j) + α and the bottom of P_0.
Since G(f_j) + α touches P_0 in x, any other common point lies in a facet of P_0 incident to (x, β_{P_0}(x)). If it does not lie in the facet Q_0, then the minimal dimension facet containing this point has larger dimension than Q_0, and we get a contradiction with the maximality property of (x, β_{P_0}(x)). Therefore there are at least two common points of G(f_j) + α and Q_0. Hence any hyperplane H going through Q_0 and not intersecting the interior of P_0 is singular to the function corresponding to G(f_j) + α and thus provides a solution to f_j. Since the argument above works for all f_j and Q_0 does not depend on f_j, we get that H is singular to all f_1, . . . , f_k and thus defines a solution to the system F.

Examples
First we provide several examples illustrating why the case of n > 1 is substantially harder than the case n = 1.
Stepped pyramid. In the case n = 1 it was actually shown in [8] that for any solution to the infinite Cayley system, if we look at the coefficients far enough from the origin, then in some natural sense they already form a linear solution, thus directly providing a solution to the polynomial system. This is not the case already for two variables. Consider a polynomial system consisting of one polynomial f whose convex hull is an upturned square right pyramidal frustum. For this system we construct a solution which does not become linear no matter how far away we go from the origin.
It is easier to describe the continuous version of the solution. The discrete solution is defined by the integer points of the continuous solution.
Let S_k = {(x, y) | 10(k − 1) ≤ max{|x|, |y|} ≤ 10k} for k = 1, 2, . . .. For each odd k we let the solution g : R^2 → R be constant on S_k. For each even k we divide S_k into 4 regions by the lines y = x and y = −x. On the region with x ≥ |y| we let g(x, y) = x + C, where C will be chosen later. Analogously, for x ≤ −|y| we let g(x, y) = −x + C, for y ≥ |x| we let g(x, y) = y + C, and for y ≤ −|x| we let g(x, y) = −y + C. We choose the constants in these linear and constant functions in such a way that g is continuous on the whole real plane. It is not hard to see that the graph of g is singular to the convex hull of G(f).
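One concrete choice of the constants (ours; the text leaves them implicit) makes g depend only on the sup-norm radius m = max(|x|, |y|): constant on odd rings and growing with slope 1 on even rings. A sketch checking the resulting values:

```python
# One possible choice of constants (ours): g(x, y) is the total length of
# the even-ring portion of the interval [0, m], where m = max(|x|, |y|).
# This makes g constant on odd rings S_k, of slope 1 on even rings, and
# continuous on the whole plane.
def stepped(x, y):
    m = max(abs(x), abs(y))
    g = 0.0
    k = 2
    while 10 * (k - 1) < m:                  # even ring S_k starts at 10(k-1)
        g += min(m, 10 * k) - 10 * (k - 1)   # portion of S_k below radius m
        k += 2
    return g

print(stepped(15, 0))   # 5.0: halfway through the even ring S_2
print(stepped(25, 0))   # 10.0: constant on the odd ring S_3
print(stepped(45, 0))   # 20.0: two full even rings, so g is not linear in m
```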
Stripes. Now we provide an example showing that the solution of the Cayley system can behave wildly. Specifically, we describe an almost everywhere "non-continuous" solution, that is, a solution having arbitrarily large gaps between neighboring points. For this example 2 variables again suffice, that is, n = 2.
Consider the polynomial f whose convex hull has the shape of a prism, and consider the set of points described by a function g : R^2 → R. It is not hard to see that the graph of g is singular to the convex hull of G(f). Thus the graph of g is a solution to the Cayley system corresponding to f. On the other hand, note that the gaps in the graph of g grow as x grows.

Tropical Dual Nullstellensatz over R ∞
In this section we prove the following more precise version of Theorem 4(ii).
Theorem 30. In the semiring R_∞ the system of tropical polynomials F = {f_1, . . . , f_k} of degree at most d in n variables has a solution iff the nonhomogeneous Cayley tropical linear system C_N for N = 2(n + 2)^2 k (2d)^{min(n,k)+2} has a solution.
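For orientation, the bound N in the theorem (as we read the exponents) is easy to evaluate for small parameters; this arithmetic sketch is ours:

```python
# Evaluating the bound N = 2(n+2)^2 k (2d)^(min(n,k)+2) from the theorem
# above (exponent placement as we read it) for small parameters.
def N_bound(n, k, d):
    return 2 * (n + 2) ** 2 * k * (2 * d) ** (min(n, k) + 2)

print(N_bound(1, 1, 2))   # 1152
print(N_bound(2, 2, 2))   # 16384
```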
Proof. Suppose we have a system of tropical polynomials F and consider the corresponding non-homogeneous Cayley linear system C N ⊙ y. If F has a solution then trivially C N ⊙ y also has a solution.
Suppose in the other direction that we have a solution a to the system C_N ⊙ y. If for the enveloping polytope P_0 there is x ∈ Z^n such that β_{P_0}(x) ≠ ∞ and ϕ_a(x) ≠ ∞, then we can directly apply Lemma 29(ii). But initially we only know that ϕ_a(0) ≠ ∞, and it can be that β_{P_0}(0) = ∞ (and there is no translation P_0 + α of P_0 within Z^n_+ such that β_{P_0+α}(0) ≠ ∞). Below we describe how we solve this problem.
Consider the column of C_N corresponding to the constant monomial. If it has no finite entry, the Cayley system trivially has a solution, namely an infinite one. At the same time the system of polynomials also has a solution, again an infinite one. Indeed, note that in this case no polynomial in the system has a constant term. So this case is simple, and further on we can assume that the column of C_N corresponding to the constant monomial has a finite entry.
This means that there is a polynomial in F with a finite constant term. For simplicity of notation assume that it is f_1. Now, given the system of polynomials F, we construct a system of polynomials F′ such that 1. all polynomials in F′ have finite free coefficients; 2. F′ has a solution iff F has a solution.
The idea is that for the enveloping polytope Q for the system F′ it holds that β_Q(0) ≠ ∞.
In the proof we will need the following value: g = max_i max_{I,J ∈ Dom(ϕ_i)} (ϕ_i(I) − ϕ_i(J)). (5) Informally, it measures the maximal joint variation of the ϕ-functions for the system F. We also need to assume that min_I ϕ_i(I) = 0 for all i. We can do this since adding a constant to each coefficient of a polynomial does not change singularity.
To construct F′ we first, for all i = 2, . . . , k and j = 1, . . . , n, define polynomials of the following form: M_{ij} = (−C) ⊙ x_j^α ⊙ f_i. Here the parameters C and α can be fixed in the following way: C = 2g(4d)^{2 min(n,k)+2}, α = (4d)^{min(n,k)+2}. Next, for all i > 1 we define f′_i = f_1 ⊕ M_{i1} ⊕ . . . ⊕ M_{in}. (6) Also, for each i = 2, . . . , k and j = 1, . . . , n we introduce the polynomial f′_{ij}; the difference between f′_i and f′_{ij} is that in the latter the coefficient of the component M_{ij} is decreased. Finally, we let F′ = {f_1} ∪ {f′_i, f′_{ij} | i = 2, . . . , k, j = 1, . . . , n}.
The tropical summands of the sum (6) will be called below the components of the polynomial f′_i. We specifically distinguish the f_1-component. All other components are called f_i-components. When we need to distinguish them, the component M_{ij} will be called the j-th component.
Suppose C_N ⊙ y has a solution a. Consider the non-homogeneous Cayley matrix C′_N corresponding to F′. Note that all polynomials in F′ are tropical algebraic combinations of polynomials in F. Thus the rows of C′_N are tropical linear combinations of the rows of C_N. Hence a is a solution of C′_N ⊙ y. Now we can consider the polytopes P_1, P′_2, . . . , P′_K for the polynomials in the system F′ and consider the enveloping polytope P′_0. Note that for each f ∈ F′ we have ϕ_f(0) ≠ ∞. Thus the same is true for the corresponding polytopes and for the enveloping polytope P′_0 as well. Therefore Lemma 29(ii) is applicable and we obtain a solution b = (−b_1, . . . , −b_n) ∈ R^n for F′. Note that we have N = (n + 2)K(α + d) ≤ 2(n + 2)^2 k(4d)^{min(n,k)+2}.
However we need to show the Nullstellensatz for the original system F .
So, it only remains to prove that F has a solution iff F′ has a solution. One direction is simple: since F′ consists of algebraic combinations of polynomials of F, any solution of F is also a solution of F′.
Thus it is left to show the following lemma.
Lemma 31. If there is a solution to the system F ′ then there is a solution to the system F .
The proof of this lemma has a geometric intuition, but it is not easy to see it behind the technical details. So, before proceeding with the proof we would like to explain this intuition in the case of n = 2 and k = 3. After that we provide a formal proof for the general case.
Informal proof for n = 2 and k = 3. Informally, it is convenient to think of the constants C and α as very large numbers.
We first review the construction of F′. Recall that we assume that f_1 has a finite constant term, and for both polynomials f_2 and f_3 we introduce new polynomials f′_2 and f′_3. It is instructive to look at the graph of the function ϕ_{f′_2}. It consists of the graph of ϕ_1 and of two copies of the graph of ϕ_2 translated far away along each of the axes x_1 and x_2 and far below along the vertical axis. To explain the idea behind this construction, we first note that since here we consider only singularity with a hyperplane, it does not matter whether we consider the graph of the function ϕ_{f′_2} or the bottom of the corresponding polytope. The idea behind the construction of f′_2 is that when we take the convex hull of the graph of ϕ_{f′_2} and construct the corresponding polytope P′_2, all points of the polytope P_1 (corresponding to ϕ_1) except possibly the points over the x_1-axis and x_2-axis go to the interior of the polytope P′_2. We will explain the presence of the polynomials f′_{21}, f′_{22}, f′_{31}, f′_{32} in F′ once we need them.
Next we assume that there is a solution b = (−b_1, −b_2) to the system F′. Recall that the solution corresponds to the plane χ_b. We would like to deduce that this hyperplane is also singular to the functions ϕ_1, ϕ_2, ϕ_3 corresponding to the polynomials f_1, f_2, f_3. We already know that it is singular to ϕ_1, since f_1 ∈ F′. To show that it is singular to ϕ_2 and ϕ_3 we look closer at the polynomials f′_2 and f′_3. Without loss of generality let us consider f′_2. We know that our hyperplane has at least two singular points with ϕ_{f′_2}. First of all we would like to localize them: it would not be helpful if the two singular points belonged to different components of ϕ_{f′_2}. Thus, we would like to show that there are two singular points in one of the components of ϕ_{f′_2}. We note that if there is at least one singular point in the f_1-component, then there are two singular points there. This follows from the fact that the hyperplane is singular to ϕ_1. The case when the hyperplane has only one singular point in one of the f_2-components is precisely the case where we need the polynomials f′_{21}, f′_{22}. Indeed, it is not hard to see that in this case one of these polynomials has only one singular point overall, and thus the hyperplane does not provide a solution to it.
Thus each of the polynomials f′_2 and f′_3 has at least two singular points in the same component. If these are an f_2-component and an f_3-component respectively, then we are done: clearly, the hyperplane is singular to both ϕ_2 and ϕ_3. Thus it is left to consider the case when one of the polynomials (or both) has two singular points in the f_1-component.
Here we encounter a serious obstacle. It can be that for the polynomials f′_2 and f′_3 (or for one of them) the singular points are in the f_1-component and the hyperplane is not singular to ϕ_2 and ϕ_3. For example, assume that ϕ_2 and ϕ_3 have no finite points on the x_1-axis. Then a hyperplane having two singular points with ϕ_1 on the x_1-axis and decreasing dramatically along the x_2-axis provides a solution to F′, but not to F.
Thus it is not always true that a solution of F′ constitutes a solution to F. However, in the example described above we can let b_2 = −∞ and obtain a solution for F. It turns out that this trick, with some additional work, can fix the proof. Indeed, suppose that the singular points of the hyperplane and, say, ϕ_{f′_2} are in the f_1-component. Then it is not hard to see that all these singular points lie on one of the axes x_1 or x_2. Indeed, if there is a point with both coordinates positive, then it lies in P_1, which is inside P′_2, and thus this point cannot be a singular point. If, on the other hand, there are two points, one with positive x_1-coordinate and the other with positive x_2-coordinate, then the middle point between these two points has both coordinates positive and due to convexity is still in P_1.
Thus we can further assume that all singular points for f′_2 lie on one of the axes. Without loss of generality assume that it is the x_1-axis. Since there are at least two singular points on this axis in the f_1-component, b_1 is not too large and not too small, that is, it is bounded by some value depending only on f_1 (and not on C and α). Since we are allowed to choose C as large as we want, this in particular means that Dom(ϕ_2) does not intersect the x_1-axis. We would like to stress here that at this point we have shown the theorem for the case k = 2. However, we need one more observation for the case k = 3.
Consider the other polynomial f′_3. If the domain of ϕ_3 also does not intersect the x_1-axis, then as before we can just let b_2 = −∞.
Thus we can assume that there is a point of Dom(ϕ_3) on the x_1-axis; denote this point by y. Then, just as in the case of ϕ_{f′_2}, the singular points of ϕ_{f′_3} are not in the f_1-component. Thus they are in some f_3-components, and hence χ_b is singular to ϕ_3 itself. But we need to set b_2 = −∞, and the singularity might not survive this. Consider the set Sing(χ_b, ϕ_3). If there are at least two points of this set on the x_1-axis, then once again we can let b_2 = −∞ and obtain a solution to F. Thus we can assume that there is only one singular point on the x_1-axis. Denote this point by z.
Consider both points y and z on the two-dimensional grid. To get from y to z in this grid we have to make several steps along the x_1-axis in the positive or negative direction and at least one step in the positive direction along the x_2-axis. During this path the value of χ_b cannot decrease substantially. Indeed, since z is a singular point, the difference of the values of χ_b is lower bounded by the difference of the values of ϕ_3 at the same points. Thus the maximal possible decrease along the path is upper bounded by some value depending only on ϕ_3 and thus only on f_3. Since the value of b_1 is also bounded from above and from below, from this we can deduce that b_2 is not too small, that is, it is lower bounded by some value depending only on F. Now, choosing C and α large enough, we get a contradiction with the fact that the singular points of ϕ_{f′_2} are in the f_1-component: both b_1 and b_2 are not too small, and if we place the f_2-components low enough the singular point will be in one of these components.
This proof (with some additional technical tricks) can be generalized to the general case.
Next we proceed to the formal proof of Lemma 31.
Proof of Lemma 31. The idea is to consider a solution of F′ and replace some of its coordinates by infinity. Below we describe how to choose the appropriate set of coordinates. The construction is rather straightforward: we keep only the coordinates we have to keep and replace the others by infinity.
Recall that b = (−b_1, . . . , −b_n) ∈ R^n is a solution of F′. As discussed in Section 4.1, this means that the hyperplane χ_b is singular to ϕ_f for every f ∈ F′. Note that for each polynomial f′_i there are two singularity points in the same component. Indeed, if this is not the case, consider a j-th component with one singularity point and consider the polynomial f′_{ij}. It has only one singularity point, which is a contradiction (the same argument works for the f_1-component: we should consider the polynomial f_1 in this case).
We will need one more notation: for a set T ⊆ R^n and a set S ⊆ [n], let T_S be the set of points x ∈ T such that x_j = 0 for all j ∉ S. We define a sequence of sets of coordinates in the following iterative way. First consider the set Sing(χ_b, ϕ_1) of singularity points of χ_b and ϕ_1, and let j ∈ S_0 iff there is x ∈ Sing(χ_b, ϕ_1) such that x_j ≠ 0. Next, if there is a polynomial f_i ∈ F that has not been considered yet such that Sing(χ_b, ϕ_i) contains a point outside R^n_{S_l} and Dom(ϕ_i)_{S_l} ≠ ∅, then we define S_{l+1} letting S_l ⊆ S_{l+1} and j ∈ S_{l+1} \ S_l iff there is x ∈ Sing(χ_b, ϕ_i) such that x_j ≠ 0. Thus we obtain the next set S_{l+1}.
This procedure results in a sequence S_0, S_1, . . . , S_r and a corresponding sequence of polynomials g_1, g_2, . . . , g_r, where for each l we have g_l ∈ F. For convenience denote g_0 = f_1. Note that r ≤ k, since each polynomial from F can appear in the sequence at most once. Also r ≤ n, since each S_l is a subset of [n] and each next set is strictly larger than the previous one. Thus r ≤ min(n, k).
We show the following claim.

Claim 1. For each l = 0, . . . , r and each j ∈ S_l, if b_j ≤ −2g(4d)^l, then there is j′ ∈ S_l such that b_{j′} ≥ −b_j/(4d)^{l+1}.

Informally, if there is a very small b_j, then there is a rather large b_{j′}.

Proof. We argue by induction on l.
For the case of S_0, consider a coordinate j with b_j ≤ −2g. In this case consider x ∈ Sing(χ_b, ϕ_1) such that x_j ≠ 0 (there is such an x by the definition of S_0). Consider the sum χ_b(x) = ∑_p b_p x_p. Note that for all p we have x_p ≥ 0 and x_j > 0, so b_j x_j ≤ b_j ≤ −2g. On the other hand, note that ∑_{p≠j} x_p ≤ d and, since x is a singularity point and 0 ∈ Dom(ϕ_1), we have χ_b(x) ≥ −g. Hence ∑_{p≠j} b_p x_p ≥ −g − b_j x_j ≥ −b_j/2, so there is p ≠ j with b_p ≥ −b_j/(4d); since x_p ≠ 0 for this p, we have p ∈ S_0, and the base case is done.

For the induction step consider the polynomial g_l. If j ∈ S_{l−1}, we are done by the induction hypothesis. Suppose j ∉ S_{l−1}. By the definition of g_l we have the following: 1. There is y ∈ Dom(ϕ_{g_l})_{S_{l−1}}; in particular, y_j = 0.
2. There is a singular point x ∈ Sing(χ_b, ϕ_{g_l}) such that x_j ≠ 0.
Consider the sum χ_b(x) − χ_b(y) = ∑_p b_p(x_p − y_p) and break it into two parts, over p ∈ S_{l−1} and over p ∉ S_{l−1}. Note that in the second sum (x_p − y_p) is nonnegative, since there y_p = 0. Since x is a singularity point and y ∈ Dom(ϕ_{g_l}), the whole sum is bounded from below, while the term b_j(x_j − y_j) = b_j x_j is very small; hence there is either p ∉ S_{l−1} such that b_p ≥ −b_j/(4d), or p ∈ S_{l−1} such that |b_p| ≥ −b_j/(4d). In the first case we are done immediately, and in the second case we are done by the induction hypothesis.
After this procedure we fix all coordinates of the solution not in S_r to ∞, that is, we let b_j = −∞ if j ∉ S_r, thus obtaining a new vector b′. We claim that this results in a solution of F. Indeed, suppose there is a polynomial f_i ∈ F such that there is only one z ∈ Sing(χ_{b′}, ϕ_i). Clearly, z ∈ R^n_{S_r}. Moreover, no other point can be a singularity point of the original hyperplane χ_b with ϕ_i. Indeed, other singularity points can lie only outside R^n_{S_r}, and if there were at least one, then following our construction we would have added some more coordinates to S_r. Thus there is only one singularity point in Sing(χ_b, ϕ_i), and as a result the singularity points of χ_b with ϕ_{f′_i} are in the f_1-component. Let y ∈ Sing(χ_b, ϕ_{f′_i}) be one of these singularity points. Note that by the definition of S_0 we have y ∈ R^n_{S_0} ⊆ R^n_{S_r}. We are going to get a contradiction.
Let m = min_{j ∈ S_r} b_j and M = max_{j ∈ S_r} b_j. Consider j and j′ such that b_j = m and b_{j′} = M. Consider the j′-component of f′_i and let x be the translation of z in this component, that is, x = z + α · e_{j′}. Since z ∈ R^n_{S_r} and j′ ∈ S_r, we have x ∈ R^n_{S_r}. Our goal is to bound χ_b(x) − χ_b(y) from below; here the second equality follows from the definition of f′_i and the last inequality follows from the definition of g (5). Consider the corresponding sum. If M ≥ 0, then by Claim 1 we have m ≥ −2g(4d)^r and thus M ≥ −2g(4d)^r. If M ≤ 0 and m ≥ −2g(4d)^r, this sum is greater than −2g · 2d(4d)^r.
If m ≤ −2g(4d)^r, then by Claim 1 we have M ≥ −m/(4d)^{r+1}. In all these cases χ_b(x) − χ_b(y) is greater than −C + g, and we get a contradiction with the singularity of y. This finishes the proof of Theorem 30 and thus of Theorem 4(ii).

Lower Bounds
In this subsection we provide examples showing that our bounds on N in Theorem 4 are not far from optimal. At the same time we provide the same lower bounds for Theorems 6, 8, 10.
Lower bound for Theorem 4(i) First we show a lower bound for the case of R. Namely, for any d ≥ 2 we provide a family F of n + 1 polynomials of degree at most d such that F has no solution, but the corresponding Cayley tropical system with the matrix C_{(d−1)(n−1)} has a solution.
The construction is an adaptation of the standard lower-bound example for the classical Nullstellensatz due to Lazard, Mora and Philippon (unpublished, see [3,12]).
Consider the following set F of tropical polynomials. It is not hard to see that this system has no solutions. Indeed, if there is a solution, then from f_1 we can see that x_1 = 0, then from f_2 we can see that x_2 = 0, and so on; from f_n we can see that x_n = 0. However, from f_{n+1} we have that x_n = −1, which is a contradiction.
Thus it remains to show that the Cayley tropical system with the matrix C_{(d−1)(n−1)} corresponding to the system F has a solution.
Recall that the columns of C_{(d−1)(n−1)} correspond to monomials. We associate an undirected graph G with the matrix C_{(d−1)(n−1)} in a natural way. The vertices of G are the monomials in variables x_1, …, x_n of degree at most (d−1)(n−1) (or, which is the same, the columns of C_{(d−1)(n−1)}). We connect two monomials by an edge if they both appear in the same polynomial of the form x^I ⊙ f_i for i ∈ {1, …, n}. Equivalently, we connect two monomials if there is a row of C_{(d−1)(n−1)} not corresponding to the polynomial f_{n+1} in which the entries in the columns corresponding to these monomials are both finite.
We define the weight w(m) of a tropical monomial m in the following way. First, w(x_i) = d^{i−1} for all i = 1, …, n. Second, for all monomials m_1 and m_2 we let w(m_1 ⊙ m_2) = w(m_1) + w(m_2). That is, if m = x_1^{⊙a_1} ⊙ … ⊙ x_n^{⊙a_n}, then w(m) = a_1 + a_2 d + … + a_n d^{n−1}.
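As a sanity check, the weight function and the carrying step used in the degree bound below can be sketched as follows (the encoding of a monomial as a tuple of exponents is our own illustration, not part of the paper):

```python
def weight(a, d):
    """Weight of x_1^{a_1} ⊙ ... ⊙ x_n^{a_n}: a_1 + a_2*d + ... + a_n*d^(n-1)."""
    return sum(ai * d**i for i, ai in enumerate(a))

# "Carrying": replacing (a_i, a_{i+1}) by (a_i - d, a_{i+1} + 1) preserves the
# weight (since d * d^{i-1} = d^i) but decreases the degree by d - 1.
d = 3
a = [4, 0, 2]
b = [4 - d, 0 + 1, 2]
assert weight(a, d) == weight(b, d) == 22
assert sum(b) == sum(a) - (d - 1)
```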
It turns out that the following lemma holds.
Lemma 32. If for two monomials m_1 and m_2 we have w(m_1) ≥ kd^{n−1} and w(m_2) < kd^{n−1} for some integer k, then m_1 and m_2 are not connected in G.
Proof. Note that if two monomials are connected by an edge corresponding to one of the polynomials f 2 , . . . , f n , then their weights coincide. If they are connected by an edge corresponding to f 1 , then their weights differ by 1.
Note that for any k each monomial of weight kd^{n−1} − 1 has degree at least (k−1) + (d−1)(n−1). Indeed, consider such a monomial m = x_1^{⊙a_1} ⊙ … ⊙ x_n^{⊙a_n} of minimal degree. If there is i = 1, …, n − 1 such that a_i ≥ d, then we can replace a_i by a_i − d and a_{i+1} by a_{i+1} + 1 and obtain another monomial of the same weight but smaller degree. Thus (a_1, …, a_{n−1}) corresponds to the d-ary representation of the residue of kd^{n−1} − 1 modulo d^{n−1}. So for all i = 1, …, n − 1 we have a_i = d − 1 and thus a_n = k − 1.
Due to the restriction on the degree, in the graph G there is only one monomial of weight d^{n−1} − 1 and no monomials of weight kd^{n−1} − 1 for k > 1. Moreover, the unique monomial of weight d^{n−1} − 1 has maximal degree (d−1)(n−1) and thus is not connected to a monomial of higher weight by an edge.
From all this the lemma follows. Indeed, if monomials m_1 and m_2 are connected, then on the path between them there is an edge connecting monomials of weights kd^{n−1} − 1 and kd^{n−1}. However, as we have shown, this is impossible.

Now we are ready to provide a solution to the Cayley system. For a monomial of weight kd^{n−1} + s, where s < d^{n−1}, set the corresponding variable to k. Note that due to Lemma 32 if two monomials are connected, then the values of the corresponding variables are the same. Thus the rows corresponding to the polynomials f_1, …, f_n are satisfied. The rows corresponding to f_{n+1} are satisfied since the weights of the monomials differ by precisely d^{n−1}.
Remark. For the case of min-plus polynomials a straightforward adaptation works. Indeed, since in each polynomial there are only two monomials, the only way to satisfy them is to make their values equal. Thus it is enough to consider a system of min-plus polynomials F.

Lower bound for Theorem 4(ii) Now we show a lower bound for the case of R_∞. The lower bound here is much stronger, which corresponds to the weaker upper bound in the case of R_∞. Consider the following system F of tropical polynomials in the variables x_1, …, x_n, y.
This system clearly has no solutions. Indeed, we can consecutively show that all coordinates of a solution should be finite, and then the polynomials f_n and f_{n+1} give a contradiction.

Now consider the non-homogeneous Cayley system C_{d^{n−1}−1}. We are going to construct a solution for it. For a tropical monomial x_1^{a_1} ⊙ … ⊙ x_n^{a_n} ⊙ y^b let its weight be a_1 + d a_2 + d^2 a_3 + … + d^{n−1} a_n. Note that the degree in y is not counted. Consider the monomials whose y-degree coincides with their weight and let the corresponding coordinates of the solution be equal to 0. For all other monomials let the corresponding coordinates of the solution be equal to ∞. Consider the graph on the coordinates of the solution in which two coordinates are connected if the corresponding monomials appear in the same row of the Cayley matrix C_{d^{n−1}−1}. It is not hard to see that all monomials on which our solution is finite constitute a connected component of this graph containing the zero coordinates. Moreover, due to the constraint on the size of the matrix, no monomial in this component contains the variable x_n. Thus all rows of the Cayley matrix are satisfied.
Remark. For the min-plus case the same observation as in the case of R works: just consider the corresponding system of min-plus polynomials.

Tropical polynomial systems vs. Min-plus polynomial systems
In this section we show that there is a tight connection between systems of min-plus polynomials and systems of tropical polynomials. We will later use this connection to obtain min-plus dual Nullstellensatz. The reduction in one direction is simple.
Lemma 33. Over R, for any given tropical polynomial system we can construct a system of min-plus polynomials over the same set of variables and with the same set of solutions. The same is true over the domain R_∞.
Proof. Let A be a tropical polynomial system. For each of its polynomials we construct a min-plus polynomial system over the same set of variables which is equivalent to this tropical equation. For this, let min{L_1, L_2, …, L_m} be one of the polynomials of the system A, where the L_i are monomials.
It is easy to see that the minimum in the expression above being attained at least twice is equivalent to the fact that for every i = 1, …, m we have min_{j≠i} L_j ≤ L_i, that is, L_1 ⊕ … ⊕ L_{i−1} ⊕ L_{i+1} ⊕ … ⊕ L_m = L_1 ⊕ … ⊕ L_m. These equations are already in min-plus form, and thus any tropical polynomial is equivalent to a system of min-plus polynomials. To get a min-plus system equivalent to the tropical system we just take the union of the min-plus systems for all polynomials of A.
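This equivalence can be checked mechanically. In the sketch below (our own encoding: a tropical polynomial at a fixed point is just the list of its monomial values), "root" means the minimum is attained at least twice, and the min-plus system is the family of equations ⊕_{j≠i} L_j = ⊕_j L_j:

```python
def is_tropical_root(vals):
    """The minimum of the monomial values is attained at least twice."""
    return vals.count(min(vals)) >= 2

def satisfies_minplus_system(vals):
    """For every i: the minimum over j != i equals the overall minimum."""
    m = min(vals)
    return all(min(vals[:i] + vals[i + 1:]) == m for i in range(len(vals)))

# The two conditions agree on every point:
for vals in ([1, 1, 3], [0, 2, 2], [5, 6, 7], [4, 4, 4]):
    assert is_tropical_root(vals) == satisfies_minplus_system(vals)
```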
Note that exactly the same analysis works for the case R ∞ .
In the reverse direction we do not have such a tight relation, but the reduction we show below still preserves many properties.
For a given min-plus polynomial system A we first construct a corresponding tropical polynomial system T and then prove the relation between A and T.
Let us denote the variables of A by (x_1, …, x_n). For each variable x_i of A our tropical polynomial system will have two corresponding variables x_i and x′_i; thus the set of variables of T will be (x_1, …, x_n, x′_1, …, x′_n). The system T consists of polynomials of three types.
1. For each i = 1, …, n we add to T a polynomial whose roots are exactly the points with x_i = x′_i.
2. Let min_j M_j(x) = min_l L_l(x) be an arbitrary polynomial of A. For each l we add to T a polynomial of the corresponding form.
3. Symmetrically, for each min-plus polynomial min_j M_j(x) = min_l L_l(x) in A and for each j we add to T a tropical polynomial.

This completes the construction of T; note that the maximal degree of a polynomial in T is equal to the maximal degree of a polynomial in A. Now we are ready to show how A and T are related. The polynomials of T of the first type guarantee that any solution of T satisfies x_i = x′_i for all i. Thus all solutions of T lie in the image of the map H : a ↦ (a, a). If there is a solution a of A, then it is easy to see that its image (a, a) under H satisfies all polynomials of the second and the third type in T. Indeed, since min_j M_j(a) = min_l L_l(a), there is j such that M_j(a) = min_l L_l(a). Then the minimum in the corresponding tropical polynomials of the second type is attained in the monomials M_j(x) and M_j(x′). The symmetric argument works for the tropical polynomials of the third type.
Conversely, if there is a solution of T, we have already noted that it has the form (a, a). Then it is not hard to see that for each min-plus polynomial of A we have min_j M_j(a) = min_l L_l(a). Indeed, since the corresponding tropical polynomials of the second type are satisfied, we have min_j M_j(a) ≤ min_l L_l(a). On the other hand, the tropical polynomials of the third type guarantee the reverse inequality min_l L_l(a) ≤ min_j M_j(a).
The proof works in both semirings R and R ∞ .
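The role of the type 1 polynomials can be illustrated as follows. We assume here (the exact form is not spelled out above) that such a polynomial has the two monomials x_i and x′_i, so that it has a root exactly when x_i = x′_i, pinning every solution of T to the image of H : a ↦ (a, a):

```python
def has_root(vals):
    """A point is a root of a tropical polynomial iff the minimum of its
    monomial values is attained at least twice."""
    return vals.count(min(vals)) >= 2

# Assumed type-1 polynomial x_i ⊕ x'_i, evaluated at the pair (x_i, x'_i):
assert has_root([3, 3])          # x_i == x'_i: a root
assert not has_root([3, 4])      # x_i != x'_i: not a root
```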
In particular, we have that tropical prevarieties and min-plus prevarieties are topologically equivalent.

Min-plus Dual Nullstellensatz
In this section we prove Theorem 6.
For this we will apply the results of the previous section to tropical dual Nullstellensatz.
We present the proof for the semiring R. Exactly the same proof works also for R ∞ .
As in the case of the tropical dual Nullstellensatz, one direction of Theorem 6 is simple. If the system F has a solution, then the Cayley min-plus linear system C_N ⊙ y = D_N ⊙ y also has a solution: just let each coordinate of y be equal to the value of the corresponding monomial at the solution of F.
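This easy direction amounts to evaluating each monomial at the solution; a small sketch (the encoding of monomials as exponent tuples and all names below are our own illustration):

```python
from itertools import product

def eval_monomial(exponents, point):
    """Classically, x_1^{⊙a_1} ⊙ ... ⊙ x_n^{⊙a_n} at `point` is sum(a_i * x_i)."""
    return sum(a * x for a, x in zip(exponents, point))

n, N = 2, 2
point = (1.0, -2.0)  # a hypothetical solution of F
# one coordinate of y per monomial of degree at most N:
y = {e: eval_monomial(e, point)
     for e in product(range(N + 1), repeat=n) if sum(e) <= N}
assert y[(1, 1)] == point[0] + point[1]
assert y[(0, 0)] == 0            # the constant monomial evaluates to 0
```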
For the other direction, suppose the system C_N ⊙ y = D_N ⊙ y has a solution a. For the min-plus polynomial system F consider the corresponding tropical polynomial system T from the previous section. Let us denote by C^{(T)}_N its Cayley matrix. We will show that the tropical linear system C^{(T)}_N ⊙ z has a solution. From this, by Theorem 4, it follows immediately that T has a solution, and from this, by Lemma 11, we have that F has a solution.
Thus it is left to construct a solution to the tropical system C^{(T)}_N ⊙ z.

Primary Tropical and Min-Plus Nullstellensätze
Now we will deduce the primary forms of the tropical and min-plus Nullstellensätze. We start with Theorem 8, which we restate for convenience. Over the semiring R the system F has no solution iff we can construct an algebraic min-plus combination f = g of the polynomials of F of degree at most N such that for each monomial M = x_1^{⊙j_1} ⊙ … ⊙ x_n^{⊙j_n} its coefficient in f is greater than its coefficient in g. In the algebraic combination f = g we are allowed to use not only the polynomials f_i = g_i, but also g_i = f_i.
Over the semiring R_∞ the system F has no solution iff we can construct an algebraic combination f = g of degree at most N = poly(n, k) · (2d)^{min(n,k)} such that for each monomial M = x_1^{⊙j_1} ⊙ … ⊙ x_n^{⊙j_n} its coefficient in f is greater than its coefficient in g, and with the additional property that the constant term in g is finite.
Proof. We will use the min-plus linear duality for the proof of this theorem.
By Theorem 6(i) the system of polynomials F has no solution over R iff the corresponding Cayley linear system has no finite solution. This system is equivalent to a system of min-plus inequalities. By Lemma 11, the fact that this system has no finite solution is equivalent to the fact that the dual system has no solutions in R^n_∞ (recall that we allow both sides to be infinite in some rows).
This system can be interpreted back in terms of polynomials. Indeed, note that now the columns of the matrices correspond to the equations of F multiplied by monomials x^J, and the rows correspond to monomials x^I. Thus a solution of the system corresponds to a sum of equations of F multiplied by monomials such that each coefficient of the sum on the left side is smaller than the corresponding coefficient of the sum on the right side. The fact that we allow both sides to be infinite in some row corresponds to the fact that some monomials might not be present in the sum. The fact that we allow infinite coordinates in the solution corresponds to the fact that we do not have to use all polynomials x^J ⊙ f_j = x^J ⊙ g_j in the algebraic combination.
The proof of the second part of the theorem is almost the same. The only difference is that this time we should use the non-homogeneous Cayley system, which results in a linear combination of polynomials with a finite constant term.

Now we proceed to Theorem 10, which we also restate here for convenience.
Theorem 10 (Restated from p. 13). Consider the system of tropical polynomials F = {f_1 = g_1, …, f_k = g_k} in n variables. Denote by d_i the degree of the polynomial f_i and let d = max_i d_i.
The system F has no solution over R iff there is a nonsingular algebraic combination g for it of degree at most N = (n + 2)(d_1 + … + d_k). The system F has no solution over R_∞ iff there is a nonsingular algebraic combination g for it of degree at most N = poly(n, k) · (2d)^{min(n,k)} with a finite constant monomial.
Proof. By Theorem 4(i) the system of polynomials F has no solution over R iff the corresponding Cayley system C_N ⊙ y has no finite solution.
By Lemma 13 this is equivalent to the existence of z ∈ R^n_∞ such that in each row of C^T_N ⊙ z the minimum is attained only once or is equal to ∞, and for each two rows the minimums are in different columns. Recall that each column of C^T_N corresponds to a polynomial x^J ⊙ f_j and the rows correspond to the monomials x^I in these polynomials. Thus z corresponds to an algebraic combination of the polynomials of F, and the properties of z described above are equivalent to the nonsingularity of the corresponding algebraic combination.
The proof of the R ∞ case is completely analogous.
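The witness condition from Lemma 13 (each row minimum attained at most once, with the finite minimums in pairwise distinct columns) is easy to check mechanically; a sketch with ∞ modeled as float infinity (the matrix and vector below are toy data of our own):

```python
import math

def is_nonsingular_witness(rows, z):
    """rows: the rows of the transposed matrix; z: the candidate vector.
    Each row minimum of rows ⊙ z must be infinite or attained exactly once,
    and no two finite minimums may lie in the same column."""
    used_cols = set()
    for row in rows:
        vals = [a + x for a, x in zip(row, z)]
        m = min(vals)
        if math.isinf(m):
            continue                      # an infinite minimum is allowed
        if vals.count(m) != 1:
            return False                  # minimum attained at least twice
        col = vals.index(m)
        if col in used_cols:
            return False                  # two minimums in the same column
        used_cols.add(col)
    return True

inf = math.inf
assert is_nonsingular_witness([[0.0, 5.0], [5.0, 0.0], [inf, inf]], [0.0, 0.0])
assert not is_nonsingular_witness([[0.0, 0.0]], [0.0, 0.0])  # min attained twice
```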
Linear duality in min-plus algebra

Min-plus linear duality.
Below we show duality for min-plus linear systems.
Lemma 11 (Restated from p. 13). Let A, B ∈ R^{n×m}_∞ be two matrices. For any subset S ⊆ [m] exactly one of the following is true.

1. There is a solution to A ⊙ x ≤ B ⊙ x such that the coordinates x_i are finite for all i ∈ S.
2. There is a solution to B^T ⊙ y < A^T ⊙ y such that for some i ∈ S the i-th coordinate of B^T ⊙ y is finite.
For any subset S ⊆ [m] exactly one of the following is true.
1. There is a solution to A ⊙ x ≤ B ⊙ x such that for some i ∈ S the coordinate x_i is finite.
2. There is a solution to B^T ⊙ y < A^T ⊙ y such that the i-th coordinates of B^T ⊙ y are finite for all i ∈ S.
This lemma is based on the interpretation of min-plus linear systems as mean payoff games. Namely, given two matrices A and B in R^{n×m}_∞ we construct a mean payoff game G. This connection between min-plus linear systems and mean payoff games was established in [2]. We present the details for the sake of completeness.
The game G can be described as follows. We are given a directed complete bipartite graph whose vertices on the left side are r_1, …, r_n and whose vertices on the right side are c_1, …, c_m. The left-side vertices correspond to the rows of the matrices A and B, and the right-side vertices correspond to the columns of the matrices. From each vertex r_i there is an edge to each vertex c_j labeled by −a_ij. From each vertex c_j there is an edge to each vertex r_i labeled by b_ij. For the number labeling an edge (v, u) we use the notation w(v, u); thus w(r_i, c_j) = −a_ij and w(c_j, r_i) = b_ij.

There are two players, which we call the row-player and the column-player, who in turns move a token over the vertices of the graph. In the beginning of the game the token is placed at some fixed vertex, and on each turn one of the two players moves the token to some other node of the graph. Each turn is organized as follows: if the token is currently in some node r_i, then the column-player can move it to any node c_j (the column-player chooses a column); if, on the other hand, the token is in some node c_j, then the row-player can move the token to any node r_i (the row-player chooses a row). The game is infinite, and the process of the game can be described by the sequence of nodes v_0, v_1, v_2, … which the token visits. The column-player wins the game if lim inf_{N→∞} (1/N) ∑_{t<N} w(v_t, v_{t+1}) is positive. If this limit is negative, then the row-player wins, and if it is zero we have a draw. If some entries of the matrices A, B are infinite, we assume that there are no corresponding edges in the graph. Alternatively, we can assume that there are edges labeled by ∞ and the player following such an edge loses immediately. The process of the game can be viewed in the following way: after each move of the column-player he receives from the row-player the amount −a_ij, and after each move of the row-player the row-player receives from the column-player the amount −b_ij (that is, the column-player receives b_ij).
The goal of both players is to maximize their amount. If one of them can play in such a way that his amount grows to infinity as the game proceeds, then he wins. If the amounts of the players always stay within some bounds, then the result of the game is a draw.
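Once both players fix positional strategies, the play from any start eventually enters a cycle, and the sign of that cycle's mean weight decides the winner. A minimal sketch (the graph encoding and all names are our own illustration):

```python
def cycle_mean(succ, weight, start):
    """Follow the successor map induced by two positional strategies from
    `start`, detect the eventual cycle and return its mean edge weight."""
    seen, path, v = {}, [], start
    while v not in seen:
        seen[v] = len(path)
        path.append(v)
        v = succ[v]
    cycle = path[seen[v]:] + [v]
    total = sum(weight[(a, b)] for a, b in zip(cycle, cycle[1:]))
    return total / (len(cycle) - 1)

# A 1x1 example: r1 -> c1 with weight -a_11, c1 -> r1 with weight b_11.
succ = {"r1": "c1", "c1": "r1"}
weight = {("r1", "c1"): -2, ("c1", "r1"): 5}
assert cycle_mean(succ, weight, "r1") == 1.5  # positive mean: column-player wins
```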
Note that if all entries of the matrices are finite, the game graph is complete bipartite, and it is easy to see that in this case the winner of the game does not depend on the starting position. The situation is different in the case of matrices with entries from R_∞.
For this game G the following property holds. It is implicit in [2].

Proposition 1.
There is a finite solution to A ⊙ x ≤ B ⊙ x iff the column-player has a non-losing strategy starting from any position.
There is a solution to A ⊙ x ≤ B ⊙ x (possibly including ∞ coordinates) iff the column-player has a non-losing strategy starting from some position.
There is a solution to A ⊙ x ≤ B ⊙ x (possibly including ∞ coordinates) with finite coordinate x_i iff the column-player has a non-losing strategy starting from position c_i.
There is a solution to A ⊙ x ≤ B ⊙ x (possibly including ∞ coordinates) such that the j-th coordinate of A ⊙ x is finite iff the column-player has a non-losing strategy starting from position r_j.
Proof. We can always add the same number to all coordinates of a solution. In particular, there is a solution to A ⊙ x ≤ B ⊙ x iff there is a solution such that x_j ≥ 0 for all j and min_j x_j = 0.
We are going to show that the existence of such a solution is equivalent to the existence of a non-losing strategy for the column-player. The proof is very intuitive, but to make the intuition clear we have to explain what x means in terms of the game. To do this, assume that the column-player has a non-losing strategy starting from some position. We know that if the player follows the strategy, then his amount does not decrease to −∞. But it might become negative at some moments of the game. For an arbitrary vertex c_j let us denote by x′_j the minimal amount such that if the game starts in c_j and the column-player has x′_j in the beginning, then he never has to go below zero. If in some position c_j the column-player has no non-losing strategy, we naturally set x′_j = ∞. It turns out that the vector x′ is essentially a solution of the min-plus linear system. Indeed, suppose that the column-player has a non-losing strategy and consider the corresponding x′. Assume that we are in position c_j. Then for each move of the row-player from c_j to r_i there is a move of the column-player from r_i to some c_k such that the remaining amount of the column-player after these two moves is at least x′_k (so he does not go below his budget). That is, for each i and j there is k such that x′_j + b_ij − a_ik ≥ x′_k, i.e., a_ik + x′_k ≤ b_ij + x′_j. And this precisely means that A ⊙ x′ ≤ B ⊙ x′.

Now suppose that there is a solution x of the min-plus linear system. Let us give the column-player the amount x_j if the game starts in c_j. Then, reversing the argument, we have that for each i and j there is k such that x_j + b_ij − a_ik ≥ x_k. And this means that for each position c_j and for each move of the row-player from c_j to r_i there is a move of the column-player from r_i to c_k such that the budget of the column-player does not go below x_k. Thus the column-player indeed never goes below the budgets x, and thus does not lose the game if he chooses the described moves.
We have shown all statements except the last one. For the last statement note that the column-player does not lose in the position r_j iff he has a move to some position c_i such that, first, he does not lose immediately, and second, he does not lose in position c_i. Thus a_ji is finite and x_i is finite. It is easy to reverse this argument.
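The inequality extracted from the strategy can be verified directly: x satisfies A ⊙ x ≤ B ⊙ x iff for every row i we have min_k(a_ik + x_k) ≤ min_j(b_ij + x_j). A sketch on toy matrices of our own:

```python
def minplus_apply(M, x):
    """(M ⊙ x)_i = min_j (m_ij + x_j)."""
    return [min(m + xj for m, xj in zip(row, x)) for row in M]

def is_solution(A, B, x):
    """Row-wise comparison A ⊙ x <= B ⊙ x."""
    return all(a <= b for a, b in zip(minplus_apply(A, x), minplus_apply(B, x)))

A = [[0, 3], [2, 0]]
B = [[1, 3], [2, 1]]
assert is_solution(A, B, [0, 0])      # A ⊙ x = [0, 0] <= [1, 1] = B ⊙ x
assert not is_solution(B, A, [0, 0])
```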
Next we show one more property which seems to be the only new step towards min-plus linear duality.

Proposition 2.
There is a finite solution to A ⊙ x < B ⊙ x iff the column-player has a winning strategy starting from any position.
There is a solution to A ⊙ x < B ⊙ x iff the column player has a winning strategy starting from some position.
There is a solution to A ⊙ x < B ⊙ x with finite coordinate x i iff the column player has a winning strategy starting from position c i .
There is a solution to A ⊙ x < B ⊙ x such that the j-th coordinate of A ⊙ x is finite iff the column-player has a winning strategy starting from position r_j.
Proof. Suppose there is a solution x of A ⊙ x < B ⊙ x. Then for small enough positive ε, x is a solution of A ⊙ x ≤ (B − ε) ⊙ x, where we subtract ε from all entries of B. Then by Proposition 1 there is a non-losing strategy for the column-player in the mean payoff game G′ corresponding to the system A ⊙ x ≤ (B − ε) ⊙ x. Let the column-player apply the same strategy to the game G corresponding to A ⊙ x < B ⊙ x. Then, compared to the game G′, after k moves the column-player will have at least the value kε added to his amount. Since the amount of the column-player is bounded from below in G′, it grows to infinity in G. Thus in the game G the column-player has a winning strategy.
For the other direction, assume that the column-player has a winning strategy. Then if we add a small enough ε to all payoffs of the row-player, the column-player still has a winning strategy, which is in particular non-losing. Thus by Proposition 1 there is a solution x of A ⊙ x ≤ (B − ε) ⊙ x, where we subtract ε from each entry of B. Clearly the very same x is a solution of A ⊙ x < B ⊙ x, and we are done.

Now to get the lemma it is only left to use the duality of mean payoff games.

Tropical duality
Suppose we are given a tropical linear system A ⊙ x for A ∈ R^{n×m} and we are interested in whether it has a solution. First of all, it is known that if the number of variables is greater than the number of equations, then there is always a solution. So we can assume that n ≥ m. Next note that if we add the same number to all entries in some row of A, then the set of solutions does not change. One simple obstacle to A ⊙ x having a solution is if we can add some numbers to all rows of A and possibly permute rows and columns in such a way that the minimums in the first m rows of the resulting matrix are attained exactly in the entries (1, 1), (2, 2), …, (m, m). It is easy to see that if this is the case, there are no solutions to A ⊙ x. It turns out that this is the only obstacle. We give a proof below; however, we note that this is already implicit in [9,16].
Lemma 13 (Restated from p. 14). Let A ∈ R^{n×m}_∞ be a matrix. For any subset S ⊆ [m] exactly one of the following is true.
1. There is a solution to A ⊙ x such that the coordinates x_i with i ∈ S are finite.

2. There is z such that in each row of A^T ⊙ z the minimum is attained only once or is equal to ∞, for each two rows with finite minimum the minimums are in different columns, and for some i ∈ S the i-th coordinate of A^T ⊙ z is finite.
For any subset S ⊆ [m] exactly one of the following is true.
1. There is a solution to A ⊙ x such that for some i ∈ S the coordinate x i is finite.
2. There is z such that in each row of A^T ⊙ z the minimum is attained only once or is equal to ∞, for each two rows with finite minimum the minimums are in different columns, and the i-th coordinates of A^T ⊙ z are finite for all i ∈ S.
Proof. Given a tropical product A ⊙ a of a matrix by a vector, where A ∈ R^{n×m}_∞, it is convenient to introduce the values m_i(A ⊙ a) for i = 1, …, n: m_i(A ⊙ a) is the index of the column in which the finite minimum in row i is situated (if there is one). If there are several minimums, m_i(A ⊙ a) corresponds to the first one. When the matrix and the vector are clear from the context, we simply write m_i.
Denote by C_i, for i = 1, …, m, the matrix in R^{n×m} with 1-entries in the i-th column and 0-entries in the other columns. Denote by R_i, for i = 1, …, m, the matrix in R^{m×n} with 1-entries in the i-th row and 0-entries in the other rows. Note that R_i = C_i^T. We will show the first part of the lemma; the proof of the second part is completely analogous.
Suppose we are given a matrix A ∈ R^{n×m} and consider the tropical system A ⊙ x. As shown in [10] (cf. Section 5), x is a solution of it iff for all small enough ε > 0 the vector x is a solution of the min-plus system consisting of the inequalities (A + εC_i) ⊙ x ≤ A ⊙ x for all i = 1, …, m. By min-plus linear duality this system has a solution x with finite coordinates x_i for i ∈ S if and only if the system

(A^T  A^T  ⋯  A^T) ⊙ y < (A^T + εR_1  A^T + εR_2  ⋯  A^T + εR_m) ⊙ y    (9)

has no solution y such that for some i ∈ S the i-th coordinate of (A^T  A^T  ⋯  A^T) ⊙ y is finite.
On the right-hand side of (9) we have a block matrix with blocks A^T + εR_i. It is left to show that the system (9) has the specified solution iff there is z such that in each row of A^T ⊙ z the minimum is attained only once or is equal to ∞, for each two rows with finite minimum the minimums are in different columns, and for some i ∈ S the i-th coordinate of A^T ⊙ z is finite.

Note that if y is a solution, then in each row i where the minimum on the left-hand side is finite we have n(i − 1) < m_i ≤ ni. Indeed, otherwise the minimum in this row on the left-hand side is not smaller than on the right-hand side. Thus if y is a solution, then for each row i with finite minimum there exists a column j_i of the i-th block such that the minimum is attained in this column. Note also that for i_1 ≠ i_2 with finite minimums we have m_{i_1} ≢ m_{i_2} (mod n). Otherwise the rows i_1, i_2 and the columns j_{i_1} = j_{i_2} of the blocks i_1, i_2 would form a 2 × 2 subsystem

( a_{i_1, j_{i_1}} + ε   a_{i_1, j_{i_1}} )
( a_{i_2, j_{i_1}}       a_{i_2, j_{i_1}} + ε ) ⊙ ( y_{n i_1 + j_{i_1}}, y_{n i_2 + j_{i_1}} )^T,

which has minimums in the (1, 1) and (2, 2) entries, which is impossible. Thus the columns j_i correspond to different columns of the matrix A^T. Let us consider the tropical system A^T ⊙ z and the vector z whose finite coordinates are the y_{n i + j_i}. For this z the minimum in each row is either infinite or attained once, and no two minimums are in the same column. Indeed, if the minimum were attained twice for some row, then clearly in the same row of (9) we would have equality. Note also that the i-th coordinate of (A^T  A^T  ⋯  A^T) ⊙ y is finite iff the i-th coordinate of A^T ⊙ z is finite.
In the other direction, suppose we have z such that in each row of A^T ⊙ z the minimum is either infinite or attained once, and no two minimums are in the same column. Then we can consider the columns containing the minimums and set the corresponding coordinates of y equal to the corresponding coordinates of z; all other coordinates of y we set to infinity. Then for any small enough ε we obtain a solution of (9), and the i-th coordinate of A^T ⊙ z is finite iff the i-th coordinate of (A^T  A^T  ⋯  A^T) ⊙ y is finite.