Belief Propagation Guided Decimation on Random k-XORSAT
Abstract
We analyse the performance of Belief Propagation Guided Decimation, a physics-inspired message passing algorithm, on the random k-XORSAT problem. Specifically, we derive an explicit threshold up to which the algorithm succeeds with a strictly positive probability that we compute explicitly, but beyond which the algorithm with high probability fails to find a satisfying assignment. In addition, we analyse a thought experiment called the decimation process, for which we identify a (non-)reconstruction and a condensation phase transition. The main results of the present work confirm physics predictions from [Ricci-Tersenghi and Semerjian: J. Stat. Mech. 2009] that link the phase transitions of the decimation process with the performance of the algorithm, and improve over partial results from a recent article [Yung: Proc. ICALP 2024].
Keywords and phrases: random k-XORSAT, belief propagation, decimation process, random matrices
Category: Track A: Algorithms, Complexity and Games
Funding: Amin Coja-Oghlan: supported by DFG CO 646/3, DFG CO 646/5 and DFG CO 646/6.
1 Introduction and results
1.1 Background and motivation
The random k-XORSAT problem shares many characteristics with other intensely studied random constraint satisfaction problems (“CSPs”) such as random k-SAT. For instance, as the clause/variable density increases, random k-XORSAT possesses a sharp satisfiability threshold preceded by a reconstruction or “shattering” phase transition that affects the geometry of the set of solutions [2, 11, 16, 23]. As in random k-SAT, these transitions appear to significantly impact the performance of certain classes of algorithms [6, 15]. At the same time, random k-XORSAT is more amenable to mathematical analysis than, say, random k-SAT. This is because the XOR operation is equivalent to addition modulo two, which is why a k-XORSAT instance translates into a linear system over 𝔽₂. In effect, k-XORSAT can be solved in polynomial time by means of Gaussian elimination. In addition, the algebraic nature of the problem induces strong symmetry properties that simplify its study [3].
Because of its similarities with other random CSPs combined with said relative amenability, random k-XORSAT provides an instructive benchmark. This was noticed not only in computer science, but also in the statistical physics community, which has been contributing intriguing “predictions” on random CSPs since the early 2000s [18, 21]. Among other things, physicists have proposed a message passing algorithm called Belief Propagation Guided Decimation (“BPGD”) that, according to computer experiments, performs impressively on various random CSPs [20]. Furthermore, Ricci-Tersenghi and Semerjian [24] put forward a heuristic analysis of BPGD on random k-SAT and k-XORSAT. Their heuristic analysis proceeds by way of a thought experiment based on an idealized version of the algorithm. We call this thought experiment the decimation process. Based on physics methods Ricci-Tersenghi and Semerjian surmise that the decimation process undergoes two phase transitions, specifically a reconstruction and a condensation transition. A key prediction of Ricci-Tersenghi and Semerjian is that these phase transitions are directly linked to the performance of the BPGD algorithm. Due to the linear algebra-induced symmetry properties, in the case of random k-XORSAT all of these conjectures come as elegant analytical expressions.
The aim of this paper is to verify the predictions from [24] on random k-XORSAT mathematically. Specifically, our aim is to rigorously analyse the BPGD algorithm on random k-XORSAT, and to establish the link between its performance and the phase transitions of the decimation process. A first step towards a rigorous analysis of BPGD on random k-XORSAT was undertaken in a recent contribution by Yung [25]. However, Yung’s analysis turns out not to be tight. Specifically, apart from requiring spurious lower bounds on the clause length k, Yung’s results do not quite establish the precise connection between the decimation process and the performance of BPGD. One reason for this is that [25] relies on “annealed” techniques, i.e., essentially moment computations. Here we instead harness “quenched” arguments that were partly developed in prior work on the rank of random matrices over finite fields [3, 8].
Throughout we let k ≥ 3 and n be integers and d > 0 a positive real. Let m be a Poisson random variable with mean dn/k and let F = F_k(n, m) be a random k-XORSAT formula with variables x_1, …, x_n and m random clauses of length k. (Two random variables are equal in distribution if they have the same distribution functions; here m follows the Poisson distribution with mean dn/k.) To be precise, every clause of F is an XOR of precisely k distinct variables, each of which may or may not come with a negation sign. The clauses are drawn uniformly and independently out of the set of all possibilities. Thus, d equals the average number of clauses in which a given variable appears.
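For concreteness, here is a short Python sketch of this model. The encoding of a clause as a pair (vars, b), with the negation signs folded into a single parity bit b, and all function names are our own illustrative choices rather than notation from the paper.

```python
import random

def sample_kxorsat(n, k, d, seed=0):
    """Draw a random k-XORSAT instance: m ~ Po(d*n/k) clauses, each an XOR of
    k distinct variables with random negation signs.  A clause is stored as
    (vars, b), encoding the constraint x[vars[0]] ^ ... ^ x[vars[k-1]] == b,
    where the parity bit b absorbs the signs."""
    rng = random.Random(seed)
    lam = d * n / k
    # Poisson(lam) via exponential inter-arrival times (robust for large lam).
    m, s = 0, rng.expovariate(1.0)
    while s < lam:
        m += 1
        s += rng.expovariate(1.0)
    clauses = [(rng.sample(range(n), k), rng.randrange(2)) for _ in range(m)]
    return clauses

clauses = sample_kxorsat(n=1000, k=3, d=0.7)
# Each variable appears in d clauses on average: len(clauses)*3/1000 is about 0.7.
```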
1.2 Belief Propagation Guided Decimation
The first result vindicates the predictions from [24] concerning the success probability of the BPGD algorithm. BPGD sets its ambitions higher than merely finding a solution to the k-XORSAT instance F: the algorithm attempts to sample a solution uniformly at random. To this end BPGD assigns values to the variables of F one after the other. In order to assign the next variable the algorithm attempts to compute the marginal probability that the variable is set to “true” under a random solution to the k-XORSAT instance, given all previous assignments. More precisely, suppose BPGD has assigned values to the variables x_1, …, x_t already. Write σ(x_1), …, σ(x_t) ∈ {0, 1} for their values, with 1 representing “true” and 0 “false”. Further, let F_t be the simplified formula obtained by substituting σ(x_i) for x_i for i ≤ t. We drop any clauses from F_t that contain variables from x_1, …, x_t only, deeming any such clauses satisfied. Thus, F_t is a XORSAT formula with variables x_{t+1}, …, x_n. Its clauses contain at least one and at most k variables, as well as possibly a constant (the XOR of the values substituted in for x_1, …, x_t).
Let τ be a uniformly random solution of the XORSAT formula F_t, assuming that F_t remains satisfiable. Then BPGD aims to compute the marginal probability μ_{t+1} = P[τ(x_{t+1}) = 1] that a random satisfying assignment of F_t sets x_{t+1} to “true”. This is where Belief Propagation (“BP”) comes in. An efficient message passing heuristic for computing precisely such marginals, BP returns an “approximation” π_{t+1} of μ_{t+1}. We will recap the mechanics of BP in Section 2.2 (the value π_{t+1} is defined precisely in (2.9)). Having computed the BP “approximation”, BPGD proceeds to assign x_{t+1} the value “true” with probability π_{t+1}, otherwise sets x_{t+1} to “false”, then moves on to the next variable. The pseudocode is displayed as Algorithm 1.
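The decimation loop itself is short; the following sketch mirrors the description above, with `bp_marginal` as a hypothetical stand-in for the BP subroutine of Section 2.2 (the clause encoding is the one from the sampler above).

```python
import random

def bpgd(n, clauses, bp_marginal, seed=0):
    """Skeleton of BPGD.  `bp_marginal(n, clauses, x)` is a placeholder for
    the BP 'approximation' of the marginal of x; it is not the paper's code."""
    rng = random.Random(seed)
    sigma = [None] * n
    for x in range(n):                       # assign x_1, ..., x_n in order
        pi = bp_marginal(n, clauses, x)      # BP estimate of P(x is true)
        sigma[x] = 1 if rng.random() < pi else 0
        simplified = []
        for vars_, b in clauses:
            if x in vars_:
                vars_ = [v for v in vars_ if v != x]
                b ^= sigma[x]                # substitute the chosen value
                if not vars_:                # fully assigned clause is dropped,
                    continue                 # deemed satisfied as in the text
            simplified.append((vars_, b))
        clauses = simplified
    return sigma
```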
Let us pause for a few remarks. First, if the BP approximations are exact, i.e., if F_t is satisfiable and π_{t+1} = μ_{t+1} for all t, then Bayes’ formula shows that BPGD outputs a uniformly random solution of F. However, there is no universal guarantee that BP returns the correct marginals. Accordingly, the crux of analysing BPGD is precisely to figure out whether this is the case. Indeed, the heuristic work of [24] ties the accuracy of BP to a phase transition of the decimation process thought experiment, to be reviewed momentarily.
Second, the strategy behind the BPGD algorithm, particularly the message passing heuristic for “approximating” the marginals, generalizes well beyond k-XORSAT. For instance, the approach applies to k-SAT verbatim. That said, due to the algebraic nature of the XOR operation, BPGD is far easier to analyse on k-XORSAT. In fact, in XORSAT the marginal probabilities are guaranteed to be half-integral as seen in Fact 2.2, i.e.,
μ_{t+1} ∈ {0, 1/2, 1} for all t.   (1.1)
As a consequence, on XORSAT the BPGD algorithm effectively reduces to a purely combinatorial algorithm called Unit Clause Propagation [18, 24], as per Proposition 11, a fact that we will exploit extensively (see Section 2.7).
1.3 A tight analysis of BPGD
In order to state the main results we need to introduce a few threshold values. To this end, given d and an additional real parameter θ ∈ [0, 1] that depends on the time t, consider the functions Φ_{d,θ} and φ_{d,θ} displayed in (1.2)–(1.3) below. The function Φ_{d,θ} is known in physics parlance as the “Bethe free entropy” [8, 18]; its stationary points coincide with the fixed points of φ_{d,θ}, as we will verify in Section 2.1.
(1.2)
(1.3)
Let α_* be the smallest and α^* the largest fixed point of φ_{d,θ} in [0, 1]. Figure 2 visualizes φ_{d,θ} for different values of d and θ. Further, define
(1.4)
(1.5)
The value d_sat is the random k-XORSAT satisfiability threshold [3, 11, 23]. Thus, for d < d_sat the random k-XORSAT formula F possesses satisfying assignments w.h.p., while F is unsatisfiable for d > d_sat w.h.p. Furthermore, d_core equals the threshold for the emergence of a giant 2-core within the k-uniform hypergraph induced by F [3, 22]. This implies that for d < d_core the set of solutions of F is connected in a certain well-defined way, while for d_core < d < d_sat the set of solutions shatters into an exponential number of well-separated clusters [15, 18]. Moreover, a simple linear time algorithm is known to find a solution w.h.p. for d < d_core [15]. The relevance of the remaining threshold d_min will emerge in Theorem 1. A bit of calculus reveals that
d_min < d_core < d_sat.   (1.6)
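To illustrate how thresholds of this kind are evaluated, the sketch below computes the 2-core threshold d_core via the classical formula d_core = min_{0<q<1} −ln(1−q)/q^{k−1} from the hypergraph core literature [22], together with the smallest and largest fixed points of the associated map q ↦ 1 − e^{−d q^{k−1}}. These closed forms are standard for the 2-core and serve here as a stand-in for the paper’s definitions (1.4)–(1.5), not as a transcription of them.

```python
import math

def d_core(k, grid=10**6):
    """2-core threshold in terms of the average variable degree d: the largest
    d for which q = 1 - exp(-d*q**(k-1)) has no root q > 0, given by the
    minimum over 0 < q < 1 of -ln(1-q)/q**(k-1)."""
    return min(-math.log1p(-i / grid) / (i / grid) ** (k - 1)
               for i in range(1, grid))

def extreme_fixed_points(k, d, iters=300):
    """Smallest and largest fixed points of q -> 1 - exp(-d*q**(k-1)),
    reached by monotone iteration from q = 0 and q = 1 respectively."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lo = 1.0 - math.exp(-d * lo ** (k - 1))
        hi = 1.0 - math.exp(-d * hi ** (k - 1))
    return lo, hi

print(d_core(3))                     # ~2.4554 (clause density ~0.8185)
print(extreme_fixed_points(3, 2.6))  # (0.0, ~0.84): two stable fixed points
```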
The following theorem determines the precise clause-to-variable densities where BPGD succeeds/fails. To be precise, in the “successful” regime BPGD does not actually succeed with high probability, but with an explicit probability strictly between zero and one, which is displayed in Figure 2.
Theorem 1.
Let k ≥ 3.
(i) If d < d_min, then BPGD outputs a satisfying assignment with a non-vanishing probability given by the explicit expression
(1.7)
(ii) If d_min < d < d_sat, then w.h.p. BPGD fails to output a satisfying assignment.
Theorem 1 vindicates the predictions from Ricci-Tersenghi and Semerjian [24, Section 4] as to the performance of BPGD, and improves over the results from Yung [25]. Specifically, Theorem 1 (i) verifies the formula for the success probability from [24, Eq. (38)]. Combinatorially, the formula (1.7) results from the possible presence of bounded-length cycles (so-called toxic cycles) that may cause the algorithm to run into contradictions. This complements Yung’s prior work, which has no positive result on the performance of BPGD. Moreover, Yung’s negative results [25, Theorems 2–3] only apply to large clause lengths k and to densities well above d_min, while Theorem 1 (ii) covers all k ≥ 3 and kicks in at the correct threshold d_min predicted in [24].
1.4 The decimation process
In addition to the BPGD algorithm itself, the heuristic work [24] considers an idealised version of the algorithm, the decimation process. This thought experiment highlights the conceptual reasons behind the success/failure of BPGD. Just like BPGD, the decimation process assigns values to variables one after the other for good. But instead of the BP “approximations” the decimation process uses the actual marginals given its previous decisions. To be precise, suppose that the input formula F is satisfiable and that the variables x_1, …, x_t have already been assigned values σ(x_1), …, σ(x_t) in the previous iterations. Obtain F_t by substituting the values σ(x_i) for x_i and dropping any clauses that do not contain any of x_{t+1}, …, x_n. Thus, F_t is a XORSAT formula with variables x_{t+1}, …, x_n. Let τ be a random satisfying assignment of F_t. Then the decimation process sets x_{t+1} according to the true marginal μ_{t+1} = P[τ(x_{t+1}) = 1], thus ultimately returning a uniformly random satisfying assignment of F.
Clearly, if indeed the BP “approximations” are correct, then the decimation process and BPGD are identical. Thus, a key question is for what parameter regimes the two processes coincide or diverge, respectively. As it turns out, this question is best answered by parametrising not only in terms of the average variable degree d, but also in terms of the “time” parameter t of the decimation process.
1.5 Phase transitions of the decimation process
Ricci-Tersenghi and Semerjian heuristically identify several phase transitions in terms of d and θ that the decimation process undergoes. We will confirm these predictions mathematically and investigate how they relate to the performance of BPGD.
The first set of relevant phase transitions concerns the so-called non-reconstruction property. Roughly speaking, non-reconstruction means that the marginal of x_{t+1} is determined by short-range rather than long-range effects. Since Belief Propagation is essentially a local algorithm, one might expect that the (non-)reconstruction phase transition coincides with the threshold up to which BPGD succeeds; cf. the discussions in [5, 16].
To define (non-)reconstruction precisely, we associate a bipartite graph G(F_t) with the formula F_t. The vertices of this graph are the variables and clauses of F_t. Each variable is adjacent to the clauses in which it appears. For a (variable or clause) vertex v of F_t let ∂v be the set of neighbours of v in G(F_t). More generally, for an integer ℓ ≥ 1 let ∂^ℓ v be the set of vertices of G(F_t) at shortest path distance precisely ℓ from v. Following [16], we say that F_t has the non-reconstruction property if
(1.8)
Conversely, F_t has the reconstruction property if
(1.9)
To parse (1.8), notice that in the left probability term we condition on both the outcome of the first t steps of the decimation process and the values that the random solution assigns to the variables at distance exactly 2ℓ from x_{t+1}. By contrast, in the right probability term we only condition on the first t steps. Thus, the second probability term matches the probability from the decimation process. Hence, (1.8) compares the probability that a random solution sets x_{t+1} to one given the values of all variables at distance 2ℓ from x_{t+1} with the plain marginal probability that x_{t+1} is set to one. What (1.8) asks is that these two probabilities be asymptotically equal in the limit of large ℓ, with high probability over the choice of F and the prior steps of the decimation process.
Confirming the predictions from [24], the following theorem identifies the precise regimes of d and θ where (non-)reconstruction holds. To state the theorem, we need to know that for d_min < d < d_sat a certain polynomial, displayed in (1.10) below, has precisely two roots 0 ≤ θ_* < θ^* < 1; we are going to prove this as part of Proposition 4 below. Let
(1.10)
(1.11)
(1.12)
Additionally, let θ_** be defined in terms of the solution of the ODE
(1.13)
on (0, 1). Note that θ_** ≥ θ^*.
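As remarked after Theorem 3 below, (1.13) is easy to evaluate numerically. The following sketch shows the integration pattern only: `rhs` and `crossing` are hypothetical placeholders for the right-hand side of (1.13) and for the condition defining θ_**, neither of which is reproduced here.

```python
def theta_from_ode(rhs, z0, crossing, h=1e-5):
    """Explicit Euler integration of z'(theta) = rhs(theta, z) on [0, 1),
    returning the first theta at which `crossing(theta, z)` fires.
    Both callables are placeholders, NOT the actual quantities of (1.13)."""
    theta, z = 0.0, z0
    while theta < 1.0:
        if crossing(theta, z):
            return theta
        z += h * rhs(theta, z)
        theta += h
    return 1.0

# Hypothetical usage once rhs/crossing encode (1.13):
# theta_star2 = theta_from_ode(rhs=lambda t, z: ..., z0=0.0,
#                              crossing=lambda t, z: ...)
```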
Theorem 2.
Let k ≥ 3 and let t = t_n be a sequence such that t/n → θ ∈ (0, 1).
(i) If d < d_min, then F_t has the non-reconstruction property w.h.p.
(ii) If d_min < d < d_sat and θ < θ_* or θ > θ^*, then F_t has the non-reconstruction property w.h.p.
(iii) If d_min < d < d_sat and θ_* < θ < θ^*, then F_t has the reconstruction property w.h.p.
Theorem 2 shows that d_min marks the precise threshold of d up to which the decimation process exhibits non-reconstruction for all θ w.h.p. By contrast, for d > d_min there is a regime of θ where reconstruction occurs. In fact, as Proposition 4 shows, for d > d_core we have θ_* = 0, and thus reconstruction holds even at θ = 0, i.e., for the original, undecimated random formula F. Prior to the contribution [24], it had been suggested that this precise scenario (reconstruction on the original problem instance) is the stone on which BPGD stumbles [5]. In fact, Yung’s negative result kicks in at this precise threshold d_core. However, Theorems 1 and 2 show that matters are more subtle. Specifically, for d_min < d < d_core reconstruction, even though absent in the initial formula F, occurs at a later “time” θ > 0 as decimation proceeds, which suffices to trip BPGD up. Also, remarkably, Theorem 2 shows that non-reconstruction is not “monotone”. The property holds for θ < θ_* and then again for θ > θ^*, but not on the interval (θ_*, θ^*), as visualised in Figure 3.
But there is one more surprise. Namely, Theorem 2 (ii) might suggest that for θ > θ^* Belief Propagation manages to compute the correct marginals, as non-reconstruction kicks back in. But remarkably, this is not quite true. Despite the fact that non-reconstruction holds, BPGD goes astray because the algorithm starts its message passing process from a mistaken, oblivious initialisation. As a consequence, for θ^* < θ < θ_** the BP “approximations” remain prone to error. To be precise, the following result identifies the precise “times” where BP succeeds/fails. To state the result let π_{t+1} denote the BP “approximation” of the true marginal of variable x_{t+1} in the formula F_t created by the decimation process (see Section 2.2 for a reminder of the definition). Also recall that μ_{t+1} denotes the correct marginal as used by the decimation process.
Theorem 3.
Let k ≥ 3 and let t = t_n be a sequence such that t/n → θ ∈ (0, 1).
(i) If d < d_min, then π_{t+1} = μ_{t+1} w.h.p.
(ii) If d_min < d < d_sat and θ < θ^* or θ > θ_**, then π_{t+1} = μ_{t+1} w.h.p.
(iii) If d_min < d < d_sat and θ^* < θ < θ_**, then π_{t+1} ≠ μ_{t+1} with non-vanishing probability.
The upshot of Theorems 2–3 is that the relation between the accuracy of BP and reconstruction is subtle. Everything goes well so long as θ < θ_*, as non-reconstruction holds and the BP approximations are correct. But if d_min < d < d_sat and θ_* < θ < θ^*, then Theorem 2 (iii) shows that reconstruction occurs. Nonetheless, Theorem 3 (ii) demonstrates that the BP approximations remain valid in this regime. By contrast, for θ^* < θ < θ_** we have non-reconstruction by Theorem 2 (ii), but Theorem 3 (iii) shows that BP misses its mark with a non-vanishing probability. Finally, for θ > θ_** everything is in order once again as BP regains its footing and non-reconstruction holds. Unfortunately BPGD is unlikely to reach this happy state because the algorithm is bound to make numerous mistakes at times θ^* n < t < θ_** n.
Theorems 2 and 3 confirm the predictions from [24, Section 4]. To be precise, while θ_* and θ^* match the predictions of Ricci-Tersenghi and Semerjian, the ODE formula (1.13) for the θ_** threshold, which is easy to evaluate numerically, does not appear in [24]. Instead of the ODE formulation, Ricci-Tersenghi and Semerjian define θ_** as the (unique) θ satisfying a certain analytic condition; Proposition 4 below shows that both definitions are equivalent. Illustrating Theorems 2–3, Figure 3 displays the phase diagram in terms of d and θ.
2 Overview
This section provides an overview of the proofs of Theorems 1–3. In the final paragraph we conclude with a discussion of further related work. We assume throughout that k ≥ 3 is an integer and that 0 < d < d_sat. Moreover, t = t_n denotes an integer sequence such that t/n converges to some θ ∈ (0, 1).
2.1 Fixed points and thresholds
The first item on our agenda is to study the functions Φ_{d,θ} and φ_{d,θ} from (1.2)–(1.3). Specifically, we are concerned with the maxima of Φ_{d,θ} and the fixed points of φ_{d,θ}, the combinatorial relevance of which will emerge as we analyse BPGD and the decimation process. We begin by observing that the fixed points of φ_{d,θ} are precisely the stationary points of Φ_{d,θ}.
Fact 2.1.
For any d, θ the stationary points of Φ_{d,θ} coincide with the fixed points of φ_{d,θ} in [0, 1]. Furthermore, for a fixed point α of φ_{d,θ} we have
(2.1)
We recall that α_* and α^* are the smallest and the largest fixed point of φ_{d,θ} in [0, 1], respectively. Fact 2.1 shows that Φ_{d,θ} attains its global maximum in [0, 1] at α_* or at α^*. Let α_max ∈ {α_*, α^*} be the maximiser of Φ_{d,θ}; in the case of a tie, set α_max = α^*. An example is visualised in Figure 4. The following proposition characterises the fixed points of φ_{d,θ} and the maximiser α_max.
Proposition 4.
(i) If d < d_min, then for all θ ∈ (0, 1) we have α_* = α^* = α_max, the function (d, θ) ↦ α_max is analytic, and α_max is the unique stable fixed point of φ_{d,θ}.
(ii) If d_min < d < d_sat, then the polynomial displayed in (1.10) has precisely two roots 0 ≤ θ_* < θ^* < 1, and the following is true.
(a) If θ < θ_* or θ > θ^*, then α_max is the unique stable fixed point of φ_{d,θ}.
(b) If θ_* < θ < θ^*, then α_* and α^* are the only stable fixed points of φ_{d,θ}.
(c) The functions θ ↦ α_* and θ ↦ α^* are analytic.
(d) The solution of the ODE (1.13) pins down θ_** ≥ θ^*; in particular, the ODE characterisation of θ_** is equivalent to the definition given in [24].
2.2 Belief Propagation
Having done our analytic homework, we proceed to recall how Belief Propagation computes the “approximations” that the BPGD algorithm relies upon. We will see that due to the inherent symmetries of XORSAT the Belief Propagation computations simplify and boil down to a simpler message passing process called Warning Propagation. Subsequently we will explain the connection between Warning Propagation and the fixed points of φ_{d,θ}.
It is probably easiest to explain BP on a general XORSAT instance F with a set V(F) of variables and a set of clauses of lengths between one and k. As in Section 1.5 we consider the graph G(F) induced by F, with the variables and clauses of F as vertices and an edge between x and a iff clause a contains variable x. Let ∂v be the set of neighbours of a vertex v. Additionally, given an assignment τ of the variables that appear in a clause a, we write τ ⊨ a iff τ satisfies a.
With each clause/variable pair (a, x) such that x ∈ ∂a Belief Propagation associates two sequences of “messages” (μ_{x→a}^{(t)})_{t≥0} and (μ_{a→x}^{(t)})_{t≥0}, directed from x to a and from a to x, respectively. These messages are probability distributions on {0, 1}, i.e.,
(2.2)  μ_{x→a}^{(t)}(0), μ_{x→a}^{(t)}(1) ≥ 0,  μ_{x→a}^{(t)}(0) + μ_{x→a}^{(t)}(1) = 1,
(2.3)  μ_{a→x}^{(t)}(0), μ_{a→x}^{(t)}(1) ≥ 0,  μ_{a→x}^{(t)}(0) + μ_{a→x}^{(t)}(1) = 1.
The initial messages are uniform, i.e.,
(2.4)  μ_{x→a}^{(0)}(s) = μ_{a→x}^{(0)}(s) = 1/2 for s ∈ {0, 1}.
Further, the messages at step t + 1 are obtained from the messages at step t via the Belief Propagation equations
(2.5)  μ_{x→a}^{(t+1)}(s) ∝ ∏_{b ∈ ∂x∖{a}} μ_{b→x}^{(t)}(s),
(2.6)  μ_{a→x}^{(t+1)}(s) ∝ Σ_{τ : τ(x)=s, τ ⊨ a} ∏_{y ∈ ∂a∖{x}} μ_{y→a}^{(t)}(τ(y)),
where s ∈ {0, 1} and τ ranges over assignments of the variables of a.
In (2.5)–(2.6) the ∝-symbol represents the normalisation required to ensure that the updated messages satisfy (2.3). In the case of (2.6) such a normalization may be impossible because the expressions on the r.h.s. could vanish for both s = 0 and s = 1. In this event we agree that
μ_{a→x}^{(t+1)} = μ_{a→x}^{(t)};
in other words, we retain the message from the previous iteration. The same convention applies to (2.5). Further, at any time t the BP messages render a heuristic “approximation” of the marginal probability that a random solution to the formula F sets a variable x to one:
(2.7)  μ_{F,x}^{(t)}(s) ∝ ∏_{a ∈ ∂x} μ_{a→x}^{(t)}(s),  s ∈ {0, 1}.
We set μ_{F,x}^{(t)}(0) = μ_{F,x}^{(t)}(1) = 1/2 if the right-hand side of (2.7) vanishes for both s = 0 and s = 1.
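The update rules (2.4)–(2.7), including the retain-on-vanish convention, fit in a few lines of Python. The following is an illustrative sketch in our own notation (clauses encoded as (vars, b) as before), not an optimised implementation.

```python
from itertools import product

def bp_marginals(n, clauses, iters=50):
    """Parallel BP on a XORSAT instance.  Messages are distributions on {0,1},
    initialised uniformly as in (2.4); vanishing normalisations fall back on
    the previous message, as agreed after (2.6)."""
    adj = [[] for _ in range(n)]
    msg = {}
    for a, (vs, b) in enumerate(clauses):
        for x in vs:
            adj[x].append(a)
            msg[('v', x, a)] = [0.5, 0.5]   # variable -> clause
            msg[('c', a, x)] = [0.5, 0.5]   # clause -> variable
    for _ in range(iters):
        new = {}
        for a, (vs, b) in enumerate(clauses):
            for x in vs:
                # (2.5): product of incoming clause messages, clause a omitted
                m = [1.0, 1.0]
                for a2 in adj[x]:
                    if a2 != a:
                        m = [m[s] * msg[('c', a2, x)][s] for s in (0, 1)]
                t = m[0] + m[1]
                new[('v', x, a)] = [m[s] / t for s in (0, 1)] if t else msg[('v', x, a)]
                # (2.6): sum over satisfying assignments of the other variables
                others = [y for y in vs if y != x]
                m = [0.0, 0.0]
                for tau in product((0, 1), repeat=len(others)):
                    w = 1.0
                    for y, v in zip(others, tau):
                        w *= msg[('v', y, a)][v]
                    m[(b + sum(tau)) % 2] += w  # value of x forced by clause a
                t = m[0] + m[1]
                new[('c', a, x)] = [m[s] / t for s in (0, 1)] if t else msg[('c', a, x)]
        msg = new
    # (2.7): marginal of x from all incoming clause messages
    out = []
    for x in range(n):
        m = [1.0, 1.0]
        for a in adj[x]:
            m = [m[s] * msg[('c', a, x)][s] for s in (0, 1)]
        t = m[0] + m[1]
        out.append(m[1] / t if t else 0.5)
    return out
```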
Fact 2.2.
The BP messages and marginals are half-integral for all t ≥ 0, i.e., for all a, x and s ∈ {0, 1} we have
(2.8)  μ_{x→a}^{(t)}(s), μ_{a→x}^{(t)}(s), μ_{F,x}^{(t)}(s) ∈ {0, 1/2, 1}.
Furthermore, for all x the limit lim_{t→∞} μ_{F,x}^{(t)} exists.
Finally, in light of Fact 2.2 it makes sense to define the approximations π_{F,x} for x ∈ V(F) by letting
(2.9)  π_{F,x} = lim_{t→∞} μ_{F,x}^{(t)}(1).
In particular, π_{t+1} = π_{F_t, x_{t+1}} is the approximation that BPGD uses in its (t+1)-st step.
2.3 Warning Propagation
Thanks to the half-integrality (2.8) of the messages, Belief Propagation is equivalent to a purely combinatorial message passing procedure called Warning Propagation (“WP”) [18]. As with BP, WP associates two message sequences (ω_{x→a}^{(t)})_t, (ω_{a→x}^{(t)})_t with every adjacent clause/variable pair. The messages take one of three possible discrete values f, u, n (“frozen”, “uniform”, “null”). Essentially, f indicates that the value of a variable is determined by unit clause propagation. Moreover, n indicates that a variable is forced to take a specific value once all variables in the 2-core of the hypergraph representation of the formula are assigned accordingly. The remaining label u indicates that neither of the above applies. To trace the BP messages from Section 2.2 actually only the two values f, u would be necessary. However, the third value n will prove useful in order to compare the BP approximations with the actual marginals. Perhaps unexpectedly given the all-uniform initialisation (2.4), we launch WP from all-frozen start values:
(2.10)  ω_{x→a}^{(0)} = ω_{a→x}^{(0)} = f for all adjacent clause/variable pairs.
Subsequently the messages get updated according to the rules
(2.11)
(2.12)
In addition to the messages we also define the mark ω_x^{(t)} of a variable node x as in (2.11), albeit without omitting the clause a. The following statement summarises the relationship between BP and WP.
Fact 2.3.
For all t ≥ 0 and all a, x with x ∈ ∂a we have
(2.13)
(2.14)
(2.15)
Moreover, analogous identities hold for the marks ω_x^{(t)} and the BP marginals μ_{F,x}^{(t)}.
Fact 2.3 implies that the WP messages and marks “converge” in the limit of large t, in the sense that eventually they do not change any more. Let ω_{x→a}, ω_{a→x}, ω_x be these limits. Furthermore, let V_f^{(ℓ)}(F), V_u^{(ℓ)}(F), V_n^{(ℓ)}(F) be the sets of variables with the respective mark after ℓ iterations. Also let V_f(F), V_u(F), V_n(F) be the sets of variables where the limit takes the respective value. The following statement traces WP on the random formula F_t produced by the decimation process.
Proposition 5.
Let 0 < θ < 1 and assume that d, θ satisfy one of the following conditions:
(i) d < d_min, or
(ii) d_min < d < d_sat and θ ∉ {θ_*, θ^*}.
Then there exists ℓ_0 = ℓ_0(d, θ) such that for any fixed ℓ ≥ ℓ_0 w.h.p. we have
(2.16)
(2.17)
2.4 The check matrix
Since the XOR operation is equivalent to addition modulo two, a XORSAT formula F with variables x_1, …, x_n and clauses a_1, …, a_m translates into a linear system over 𝔽₂, as follows. Let A = A(F) be the m × n matrix over 𝔽₂ whose (i, j)-entry equals one iff variable x_j appears in clause a_i. Adopting coding parlance, we refer to A as the check matrix of F. Furthermore, let y = y(F) ∈ 𝔽₂^m be the vector whose i-th entry is one plus the sum of any constant term and the number of negation signs of clause a_i mod two. Then the solutions of the linear system Aσ = y are precisely the satisfying assignments of F.
The algebraic properties of A therefore have a direct impact on the satisfiability of F. For example, if A has rank m, we may conclude immediately that F is satisfiable. Furthermore, the set of solutions of Aσ = y is an affine subspace of 𝔽₂^n (if non-empty). In effect, if F is satisfiable, then the number of satisfying assignments equals the size of the kernel of A. Hence the nullity nul(A) = n − rank(A) of the check matrix is a key quantity.
Indeed, the single most significant ingredient towards turning the heuristic arguments from [24] into rigorous proofs is a formula for the nullity of the check matrix of the XORSAT instance F_t from the decimation process. To unclutter the notation set A_t = A(F_t). We derive the following proposition from a recent general result about the nullity of random matrices over finite fields [8, Theorem 1.1]. The proposition clarifies the semantics of the function Φ_{d,θ} and its maximiser α_max. In physics jargon Φ_{d,θ} is known as the Bethe free entropy.
Proposition 6.
Let 0 < θ < 1 and d > 0. Then n^{−1} nul(A_t) converges to Φ_{d,θ}(α_max) in probability.
2.5 Null variables
Proposition 6 enables us to derive crucial information about the set of satisfying assignments of F_t. Specifically, for any XORSAT instance F with variables x_1, …, x_n let N(F) be the set of variables x_j such that z_j = 0 for all z ∈ ker A(F). We call the elements of N(F) null variables. Since the set of solutions of F, if non-empty, is a translation of ker A(F), any two solutions of F set the variables in N(F) to exactly the same values.
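Both quantities are directly computable for a given instance. The sketch below row-reduces the check matrix over 𝔽₂ (rows packed into Python ints, an encoding of our own choosing), returns the nullity, and reads off the null variables from a kernel basis.

```python
def f2_rref(rows):
    """Gauss-Jordan elimination over F_2.  Rows are Python ints, bit j
    standing for variable x_j.  Returns {pivot_bit: row} in reduced form."""
    basis = {}
    for r in rows:
        for p in sorted(basis, reverse=True):      # fully reduce r
            if (r >> p) & 1:
                r ^= basis[p]
        if r:
            p = r.bit_length() - 1
            for q in list(basis):                  # clear bit p elsewhere
                if (basis[q] >> p) & 1:
                    basis[q] ^= r
            basis[p] = r
    return basis

def nullity_and_null_variables(clauses, n):
    """Nullity of the check matrix A of a XORSAT instance (clauses encoded
    as (vars, b) as before) and the null variables: x_j is null iff the j-th
    coordinate vanishes on all of ker(A).  If the instance is satisfiable,
    its number of solutions equals 2**nullity."""
    basis = f2_rref(sum(1 << v for v in vs) for vs, _ in clauses)
    nullity = n - len(basis)
    free = [j for j in range(n) if j not in basis]
    kernel = []                                    # one vector per free column
    for j in free:
        v = 1 << j
        for p, row in basis.items():
            if (row >> j) & 1:
                v |= 1 << p
        kernel.append(v)
    null_vars = [j for j in range(n) if all((v >> j) & 1 == 0 for v in kernel)]
    return nullity, null_vars
```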
Proposition 7.
W.h.p. the following two statements are true for any fixed integer ℓ ≥ 1.
(i) We have V_f^{(ℓ)}(F_t) ⊆ N(F_t).
(ii) We have V_n^{(ℓ)}(F_t) ⊆ N(F_t).
Propositions 6 and 7 enable us to calculate the number of null variables of F_t, so long as we remain clear of the point where the maximiser α_max is discontinuous.
Proposition 8.
If θ ↦ α_max(d, θ) is continuous at θ, then w.h.p. the number of null variables satisfies |N(F_t)| = (1 + o(1)) ν n, where ν = ν(d, θ) is an explicit function of α_max.
Let us briefly summarise what we have learned thus far. First, because all Belief Propagation messages are half-integral, BP reduces to WP. Second, Proposition 5 shows that the fixed points of φ_{d,θ} determine the number of variables marked f or n by WP. Third, the function Φ_{d,θ} and its maximiser α_max govern the nullity of the check matrix and thereby the number of null variables of F_t. Clearly, the null variables are precisely the ones whose actual marginals are not uniform. As a next step, we investigate whether BP/WP identify these variables correctly.
In light of Proposition 5, in order to investigate the accuracy of BP it suffices to compare the number of variables marked f by WP with the true marginals. The following corollary summarises the result.
Corollary 9.
For any 0 < d < d_sat, the following statements are true.
(i) If d < d_min, or d_min < d < d_sat and θ < θ^*, or d_min < d < d_sat and θ > θ_**, then w.h.p. the BP/WP approximations agree with the true marginals of all but o(n) variables.
(ii) If d_min < d < d_sat and θ^* < θ < θ_**, then w.h.p. the BP/WP approximations differ from the true marginals on Ω(n) variables.
Thus, so long as d < d_min, or d_min < d < d_sat and θ < θ^* or θ > θ_**, the BP/WP approximations are mostly correct. By contrast, if d_min < d < d_sat and θ^* < θ < θ_**, the BP/WP approximations are significantly at variance with the true marginals w.h.p. Specifically, w.h.p. BP deems frozen variables unfrozen, thereby setting itself up for failure. Indeed, Corollary 9 easily implies Theorem 3, which in turn implies Theorem 1 (ii) without much ado.
In addition, to settle the (non-)reconstruction thresholds set out in Theorem 2 we need to investigate the conditional marginals given the values of the variables at a certain distance from x_{t+1} as in (1.8). This is where the extra value n from the construction of WP enters. Indeed, for a XORSAT instance F with variables x_1, …, x_n and an integer ℓ ≥ 1 let V_n^ℓ(F) be the set of variables x such that ω_y = n for all variables y at distance at most 2ℓ from x. Hence, V_n^ℓ(F) is the set of variables whose 2ℓ-neighbourhood is contained in V_n(F).
Corollary 10.
Assume that d_min < d < d_sat and let t/n → θ.
(i) If θ < θ_* or θ > θ^*, then for any fixed ℓ ≥ 1 we have |V_n^ℓ(F_t)| = o(n) w.h.p.
(ii) If θ_* < θ < θ^*, then there exists δ = δ(d, θ) > 0 such that for any fixed ℓ ≥ 1 we have |V_n^ℓ(F_t)| ≥ δn w.h.p.
Comparing the number of actually frozen variables with the ones marked by WP, we obtain Theorem 2.
2.6 Proving BPGD successful
We are left to prove Theorem 1. First, we need to compute the (strictly positive) success probability of BPGD for d < d_min. At this point, the fact that BPGD has a fair chance of succeeding for d < d_min should not come as a surprise. Indeed, Corollary 9 implies that the BP approximations of the marginals are mostly correct for d < d_min, at least on the formula created by the decimation process. Furthermore, so long as the marginals are correct, the decimation process and the execution of the BPGD algorithm move in lockstep. The sole difficulty in analysing BPGD lies in proving that the estimates of the algorithm are not just mostly correct, but correct up to only a bounded expected number of discrepancies over the entire execution of the algorithm. To prove this fact we combine the method of differential equations with a subtle analysis of the sources of the remaining bounded number of discrepancies. These discrepancies result from the presence of short (i.e., bounded-length) cycles in the graph G(F). Finally, the proof of the second (negative) part of Theorem 1 follows by coupling the execution of BPGD with the decimation process, and invoking Theorem 3. In the next subsection we introduce a simple combinatorial Unit Clause Propagation (“UC”) algorithm to give a glimpse of the proof of the “positive” part of Theorem 1 for d < d_min. The proof of the second part of the theorem concerning d > d_min, as well as the details of both arguments, can be found in the full version.
2.7 Unit Clause Propagation
The simple-minded Unit Clause Propagation algorithm attempts to assign random values to as yet unassigned variables one after the other. After each such random assignment the algorithm pursues the “obvious” implications of its decisions. Specifically, the algorithm substitutes its chosen truth values for all occurrences of the already assigned variables. If this leaves a clause with only a single unassigned variable, a so-called “unit clause”, the algorithm assigns that variable so as to satisfy the unit clause. If a conflict occurs because two unit clauses impose opposing values on a variable, the algorithm declares that a conflict has occurred, sets the variable to false and continues; of course, in the event of a conflict the algorithm will ultimately fail to produce a satisfying assignment. The pseudocode for the algorithm is displayed in Algorithm 3.
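A direct Python transcription of this procedure, in our own rendering (the step references in the comments follow the textual description above; Algorithm 3 itself is not reproduced here):

```python
import random

def unit_clause(n, clauses, seed=0):
    """Sketch of Unit Clause Propagation: assign a random value to a random
    free variable, then chase the implications of unit clauses, recording any
    conflict.  (The paper's version sets a conflicted variable to false and
    continues; here the first assigned value is kept, which does not affect
    whether a conflict is declared.)"""
    rng = random.Random(seed)
    sigma, conflict = {}, False
    clauses = [(list(vs), b) for vs, b in clauses]
    free = set(range(n))
    while free:
        start = rng.choice(sorted(free))            # random free variable
        queue = [(start, rng.randrange(2))]         # random truth value
        while queue:                                # unit clause chase
            y, val = queue.pop()
            if y in sigma:
                if sigma[y] != val:
                    conflict = True                 # opposing forced values
                continue
            sigma[y] = val
            free.discard(y)
            rest = []
            for vs, b in clauses:
                if y in vs:
                    vs = [v for v in vs if v != y]
                    b ^= val
                    if not vs:
                        conflict |= (b == 1)        # violated empty clause
                        continue
                    if len(vs) == 1:
                        queue.append((vs[0], b))    # new unit clause
                        continue
                rest.append((vs, b))
            clauses = rest
    return sigma, conflict
```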
Let F_t denote the simplified formula obtained after the first t iterations (in which the truth values chosen in Step 5 and any values implied by unit clauses have been substituted). We notice that the values assigned during Steps 6–12 are deterministic consequences of the choices in Step 5. In particular, the order in which unit clauses are processed in Steps 6–12 does not affect the output of the algorithm.
Proposition 11.
We have P[BPGD outputs a satisfying assignment] = P[UC outputs a satisfying assignment].
2.8 The success probability of BPGD for d < d_min
We continue to denote by F_t the sub-formula obtained after the first t iterations of the algorithm. Let V = {x_1, …, x_n} be the set of variables of the XORSAT instance F. Also, let V_t be the set of variables of F_t. Thus, V_t contains those variables among V whose values are not implied via unit clauses by the assignments made so far. Also let C_t be the set of clauses of F_t; these clauses contain variables from V_t only, and each clause contains at least two variables. Let U_t = V ∖ V_t be the set of assigned variables. Thus, after its first t iterations the algorithm has constructed an assignment σ_t of the variables in U_t. Moreover, let W_t = U_t ∖ U_{t−1} be the set of variables that receive values in the course of the t-th iteration. Additionally, let D_t be the set of clauses of F_{t−1} that consist of variables from W_t only. Finally, let F′_t be the formula comprising the variables W_t and the clauses D_t.
To characterise the distribution of F_t let m_t be the number of clauses of F_t and let m_{t,j} be the number of clauses of length j, i.e., clauses that contain precisely j variables from V_t. Observe that m_{t,1} = 0 because unit clauses get eliminated. Let 𝔉_t be the σ-algebra generated by V_t and (m_{t,j})_{2≤j≤k}.
Fact 11.
The XORSAT formula F_t is uniformly random given 𝔉_t. In other words, the variables that appear in each clause are uniformly random and independent, as are their signs.
Proof.
This follows from the principle of deferred decisions.
We proceed to estimate the random variables m_{t,j}. Let θ = t/n, so that 0 ≤ θ ≤ 1, and recall that α_* = α_*(d, θ) denotes the smallest fixed point of φ_{d,θ}. The proof of the following proposition can be found in the full version.
Proposition 12.
Suppose that d < d_min. There exists a function δ = δ(n) = o(1) such that for all 0 ≤ t ≤ n and all 2 ≤ j ≤ k we have
(2.18)
Proposition 12 paves the way for the actual computation of the success probability of BPGD. Let E_t be the event that a conflict occurs in iteration t. The following proposition gives us the correct value of P[E_t ∣ 𝔉_t] w.h.p. Since 𝔉_t is random, the value of this conditional probability is random as well.
Proposition 13.
Fix ε ∈ (0, 1), let t ≤ (1 − ε)n and define λ_t by
(2.19)
Then with probability 1 − o(1) we have P[E_t ∣ 𝔉_t] = (1 + o(1)) λ_t / n.
The proof of Proposition 13 can be found in Section 2.8.1. Moreover, in the full version we prove the following.
Proposition 14.
Fix ε ∈ (0, 1) and an integer ℓ ≥ 1. For any t ≤ (1 − ε)n we have
(2.20)
Finally, the following statement, proven in the full version, deals with the final steps of the algorithm.
Proposition 15.
For any δ > 0 there exists ε > 0 such that the probability that a conflict occurs at some time t > (1 − ε)n is bounded by δ.
Proof of Theorem 1 (i).
Pick δ > 0, fix a small enough ε = ε(δ) > 0 and let Z be the total number of times t at which conflicts occur. Proposition 11 shows that the probability that BPGD succeeds equals P[Z = 0]. In order to calculate P[Z = 0], let Z_ε be the number of failures before time (1 − ε)n. Proposition 14 shows that for any fixed ℓ we have
(2.21)
Hence, the inclusion/exclusion principle (e.g., [4, Theorem 1.21]) implies that
(2.22)
Further, using Proposition 13 and the linearity of expectation, we obtain
(2.23)
Finally, Proposition 15 implies that
(2.24)
Thus, the assertion follows from (2.22)–(2.24) upon taking the limit ε → 0.
2.8.1 Proof of Proposition 13
Recall that F′_t is the XORSAT formula that contains the variables that get assigned during iteration t and the clauses of F_{t−1} that contain variables from W_t only. Also recall that G(F′_t) signifies the graph representation of this XORSAT formula. Unless it is empty, the graph G(F′_t) is connected.
Lemma 16.
Fix ε ∈ (0, 1) and let t ≤ (1 − ε)n. With probability 1 − o(1) the graph G(F′_t) satisfies |E(G(F′_t))| ≤ |V(G(F′_t))|.
The proof of Lemma 16 can be found in the full version. Thus, with probability 1 − o(1) the graph G(F′_t) contains at most one cycle. While it is easy to check that no conflict occurs in iteration t if G(F′_t) is acyclic, in the case that G(F′_t) contains a single cycle there is a chance of a conflict. The following definition describes the type of cycle that poses an obstacle.
Definition 17.
For a XORSAT formula F we call a sequence x_1, a_1, x_2, a_2, …, x_ℓ, a_ℓ of distinct variables and clauses a toxic cycle of length ℓ (see the sketch below) if
TOX1: clause a_i contains the variables x_i and x_{i+1} only (indices mod ℓ), and
TOX2: the total number of negations in a_1, …, a_ℓ is odd iff ℓ is even.
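In the (vars, b) encoding used throughout our sketches, where b already absorbs the negation signs and any constant term, a cycle of binary clauses is toxic precisely if the parity bits sum to one; this is how the check below phrases TOX2.

```python
def is_toxic(cycle_clauses):
    """Given the clauses a_1, ..., a_l of a cycle as pairs (vars, b), with
    b_i encoding x_i + x_{i+1} = b_i over F_2, summing all equations gives
    0 = b_1 + ... + b_l, so the cycle is unsatisfiable (toxic) iff the
    parity bits sum to 1 mod 2."""
    return sum(b for _, b in cycle_clauses) % 2 == 1

# x1 + x2 = 1 together with x2 + x1 = 0 is contradictory:
print(is_toxic([([0, 1], 1), ([1, 0], 0)]))   # True
```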
Lemma 18.
(i) If F′_t contains a toxic cycle, then a conflict occurs in iteration t.
(ii) If F′_t contains no toxic cycle and G(F′_t) contains at most one cycle, then no conflict occurs in iteration t.
Proof.
Towards (i) we show that F′_t is not satisfiable if there is a toxic cycle x_1, a_1, …, x_ℓ, a_ℓ; then UC will, of course, run into a contradiction. To see that F′_t is unsatisfiable, we transform each of the clauses a_i into a linear equation x_i + x_{i+1} = y_i over 𝔽₂. Here y_i equals one iff a_i contains an even number of negations. Adding these equations up yields 0 = y_1 + ⋯ + y_ℓ in 𝔽₂. This condition is violated if the cycle is toxic.
Let us move on to (ii). Assume for contradiction that there exists a formula F′_t without a toxic cycle such that G(F′_t) contains at most one cycle and such that given F′_t, UC may run into a conflict. Consider such a formula that minimises the number of variables and clauses. Since UC succeeds on acyclic formulas, G(F′_t) contains a single cycle x_1, a_1, …, x_ℓ, a_ℓ. Apart from the cycle, F′_t contains (possibly empty) acyclic formulas A_1, …, A_ℓ attached to the variables x_1, …, x_ℓ and B_1, …, B_ℓ attached to the clauses a_1, …, a_ℓ. The formulas A_1, …, A_ℓ, B_1, …, B_ℓ are mutually disjoint and do not contain unit clauses.
We claim that B_1, …, B_ℓ are empty because F′_t is minimal. This is because given any truth assignment of x_1, …, x_ℓ, UC will find a satisfying assignment of the acyclic formulas B_1, …, B_ℓ.
Further, assume that one of the formulas A_1, …, A_ℓ is non-empty; say, A_1 is non-empty. If the start variable that UC assigns were to belong to A_1, then the cycle clauses, each containing two cycle variables, would not shrink to unit clauses, and thus UC would not assign values to these variables. Hence, UC starts by assigning a truth value to one of the variables x_1, …, x_ℓ; say, UC starts with x_1. We claim that then UC does not run into a conflict. Indeed, the clauses a_1, …, a_ℓ may force UC to assign truth values to x_2, …, x_ℓ, but no conflict can ensue because UC will ultimately satisfy the remaining clauses by assigning appropriate truth values to the variables of A_1, …, A_ℓ.
Thus, we may finally assume that all of A_1, …, A_ℓ, B_1, …, B_ℓ are empty. In other words, F′_t consists of the cycle x_1, a_1, …, x_ℓ, a_ℓ only. Since the cycle is not toxic, TOX2 does not occur. Consequently, UC will construct an assignment that satisfies all clauses a_1, …, a_ℓ. This final contradiction implies (ii).
Corollary 19.
Fix ε ∈ (0, 1) and let t ≤ (1 − ε)n. Then with probability 1 − o(1), a conflict occurs in iteration t iff F′_t contains a toxic cycle.
Proof.
Combine Lemma 16 with Lemma 18.
Thus, we are left to calculate the probability that F′_t contains a toxic cycle. To this end, we estimate the number of toxic cycles in the “big” formula F_t. Let T_ℓ be the number of toxic cycles of length ℓ in F_t.
Lemma 20.
Fix ε ∈ (0, 1) and let t ≤ (1 − ε)n.
(i) For any fixed ℓ, with probability 1 − o(1) we have E[T_ℓ ∣ 𝔉_t] = O(1).
(ii) For any ℓ_0 = ℓ_0(n) → ∞, with probability 1 − o(1) we have Σ_{ℓ ≥ ℓ_0} E[T_ℓ ∣ 𝔉_t] = o(1).
The proof of Lemma 20 is provided in the full version.
Proof of Proposition 13.
In light of Corollary 19 we just need to calculate the probability that F′_t contains a toxic cycle. Clearly, if during iteration t UC encounters a variable of F_{t−1} that lies on a toxic cycle, UC will proceed to add the entire toxic cycle to F′_t (and run into a contradiction). Furthermore, Lemma 20 shows that with probability 1 − o(1), given 𝔉_t the probability that a random variable of F_t belongs to a toxic cycle comes to
(2.25)
We now use (2.25) to calculate the desired probability of encountering a toxic cycle. To this end we notice that the (t+1)-st iteration of UC corresponds to a branching process whose expected offspring can be read off from Proposition 12, unless the root variable has already been assigned, an event whose conditional probability is easily computed. Further, given that the root variable has not been assigned previously, the expected progeny of the branching process, i.e., the expected number of variables in W_{t+1}, remains bounded. Since with probability 1 − o(1) given 𝔉_t there remain |V_t| unassigned variables in total, (2.25) implies that with probability 1 − o(1) the conditional probability of a conflict matches the value asserted in Proposition 13,
as claimed.
3 Discussion
The thrust of the present work is to verify the predictions from [24] on the BPGD algorithm and the decimation process rigorously. Concerning the decimation process, the main gap in the deliberations of Ricci-Tersenghi and Semerjian [24] that we needed to plug is the proof of Proposition 8 on the actual number of null variables in the decimation process. The proof of Proposition 8, in turn, hinges on the formula for the nullity from Proposition 6, whereas Ricci-Tersenghi and Semerjian state the (as it turns out, correct) formulas for the nullity and the number of null variables based on purely heuristic arguments.
Regarding the analysis of the BPGD algorithm, Ricci-Tersenghi and Semerjian state that they rely on the heuristic techniques from the insightful article [10] to predict the formula (1.7), but do not provide any further details; the article [10] principally employs heuristic arguments involving generating functions. By contrast, the method that we use to prove (1.7) is a bit more similar to that of Frieze and Suen [12] for the analysis of a variant of the unit clause algorithm on random 3-SAT instances, for which they also obtain the asymptotic success probability. Yet by comparison to the argument of Frieze and Suen, we pursue a more combinatorially explicit approach that demonstrates that certain small sub-formulas that we call “toxic cycles” are responsible for the failure of BPGD. Specifically, the proof of (1.7) combines the method of differential equations with Poissonisation. Finally, the proof of Theorem 1 (ii) is an easy afterthought of the analysis of the decimation process.
Yung’s work [25] on random k-XORSAT is motivated by the “overlap gap paradigm” [13], the basic idea behind which is to show that a peculiar clustered geometry of the set of solutions is an obstacle to certain types of algorithms. Specifically, Yung only considers the Unit Clause Propagation algorithm and (a truncated version of) BPGD. Following the path beaten in [19], Yung performs moment computations to establish the overlap gap property. However, moment computations (also called “annealed computations” in physics jargon) only provide one-sided bounds. As a consequence, Yung’s results require spurious lower bounds on the clause length k (one bound for Unit Clause and another for BPGD). By contrast, the present proof strategy pivots on the number of null variables rather than overlaps, and Proposition 8 provides the precise “quenched” count of null variables. A further improvement over [25] is that the present analysis pinpoints the precise threshold up to which BPGD (as well as Unit Clause) succeeds for any k ≥ 3. Specifically, Yung proves that these algorithms fail for d > d_core, while Theorem 1 shows that failure occurs already for d > d_min with d_min < d_core. Conversely, Theorem 1 shows that the algorithms succeed with a non-vanishing probability for d < d_min. Thus, Theorem 1 identifies the correct threshold for the success of BPGD, as well as the correct combinatorial phenomenon that determines this threshold, namely the onset of reconstruction in the decimation process (Theorems 2 and 3).
The BPGD algorithm as detailed in Section 2.2 applies to a wide variety of problems beyond random k-XORSAT. Of course, the single most prominent example is random k-SAT. Lacking the symmetries of XORSAT, random k-SAT does not allow for the simplification to discrete messages; in particular, the BP messages are not generally half-integral. In effect, BP and WP are no longer equivalent. In addition to random k-XORSAT, the article [24] also provides a heuristic study of BPGD on random k-SAT. But once again due to the lack of half-integrality, the formulas for the phase transitions no longer come as elegant finite-dimensional expressions. Instead, they now come as infinite-dimensional variational problems. Furthermore, the absence of half-integrality also entails that the present proof strategy does not extend to k-SAT.
The lack of inherent symmetry in random k-SAT can partly be compensated by assuming that the clause length k is sufficiently large (viz. larger than some usually unspecified constant k_0). Under this assumption the random k-SAT versions of both the decimation process and the BPGD algorithm have been analysed rigorously [7, 9]. The results are in qualitative agreement with the predictions from [24]. In particular, the BPGD algorithm provably fails to find satisfying assignments on random k-SAT instances even below the threshold where the set of satisfying assignments shatters into well-separated clusters [1, 16]. Furthermore, for random k-SAT a more sophisticated message passing algorithm called Survey Propagation Guided Decimation has been suggested [20, 24]. While on random XORSAT Survey Propagation and Belief Propagation are equivalent, the two algorithms are substantially different on random k-SAT. One might therefore hope that Survey Propagation Guided Decimation outperforms BPGD on random k-SAT and finds satisfying assignments up to the aforementioned shattering transition. A negative result to the effect that Survey Propagation Guided Decimation fails asymptotically beyond the shattering transition point for large enough k exists [14]. Yet a complete analysis of Belief/Survey Propagation Guided Decimation on random k-SAT for any k ≥ 3, in analogy to the results obtained here for random k-XORSAT, remains an outstanding challenge.
Finally, returning to random k-XORSAT, a question for future work may be to investigate the performance of various types of algorithms, such as greedy, message passing or local search, that aim to find an assignment that violates the least possible number of clauses. Of course, this question is relevant even for d > d_sat. A first step based on the heuristic “dynamical cavity method” was recently undertaken by Maier, Behrens and Zdeborová [17].
References
- [1] Dimitris Achlioptas and Amin Coja-Oghlan. Algorithmic barriers from phase transitions. In Proc. 49th FOCS, pages 793–802, 2008. doi:10.1109/FOCS.2008.11.
- [2] Dimitris Achlioptas and Michael Molloy. The solution space geometry of random linear equations. Random Structures & Algorithms, 46:197–231, 2015. doi:10.1002/RSA.20494.
- [3] Peter Ayre, Amin Coja-Oghlan, Pu Gao, and Noëla Müller. The satisfiability threshold for random linear equations. Combinatorica, 40:179–235, 2020. doi:10.1007/S00493-019-3897-3.
- [4] Béla Bollobás. Random Graphs. Cambridge University Press, 2001.
- [5] Alfredo Braunstein, Marc Mézard, and Riccardo Zecchina. Survey propagation: An algorithm for satisfiability. Random Structures & Algorithms, 27:201–226, 2005. doi:10.1002/RSA.20057.
- [6] Amin Coja-Oghlan. A better algorithm for random k-sat. SIAM Journal on Computing, 39:2823–2864, 2010. doi:10.1137/09076516X.
- [7] Amin Coja-Oghlan. Belief propagation guided decimation fails on random formulas. Journal of the ACM, 63(49), 2017. doi:10.1145/3005398.
- [8] Amin Coja-Oghlan, Alperen A. Ergür, Pu Gao, Samuel Hetterich, and Maurice Rolvien. The rank of sparse random matrices. Random Structures & Algorithms, 62:68–130, 2023. doi:10.1002/RSA.21085.
- [9] Amin Coja-Oghlan and Angelica Pachon-Pinzon. The decimation process in random k-sat. SIAM Journal on Discrete Mathematics, 26:1471–1509, 2012. doi:10.1137/110842867.
- [10] Christophe Deroulers and Rémi Monasson. Criticality and universality in the unit-propagation search rule. European Physical Journal B, 49:339–369, 2006.
- [11] Olivier Dubois and Jacques Mandler. The 3-xorsat threshold. In Proc. 43rd FOCS, pages 769–778, 2002. doi:10.1109/SFCS.2002.1182002.
- [12] Alan Frieze and Stephen Suen. Analysis of two simple heuristics on a random instance of k-sat. Journal of Algorithms, 20:312–355, 1996. doi:10.1006/JAGM.1996.0016.
- [13] David Gamarnik. The overlap gap property: a topological barrier to optimizing over random structures. Proceedings of the National Academy of Sciences, 118, 2021.
- [14] Samuel Hetterich. Analysing survey propagation guided decimation on random formulas. In Proc. 43rd ICALP, number 65, 2016.
- [15] Morteza Ibrahimi, Yash Kanoria, Matt Kraning, and Andrea Montanari. The set of solutions of random xorsat formulae. Annals of Applied Probability, 25:2743–2808, 2015.
- [16] Florent Krzakala, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian, and Lenka Zdeborová. Gibbs states and the set of solutions of random constraint satisfaction problems. Proceedings of the National Academy of Sciences, 104:10318–10323, 2007. doi:10.1073/PNAS.0703685104.
- [17] Aude Maier, Freya Behrens, and Lenka Zdeborová. Dynamical cavity method for hypergraphs and its application to quenches in the k-xorsat problem, 2024. arXiv:2412.14794.
- [18] Marc Mézard and Andrea Montanari. Information, Physics and Computation. Oxford University Press, 2009.
- [19] Marc Mézard, Thierry Mora, and Riccardo Zecchina. Clustering of solutions in the random satisfiability problem. Physical Review Letters, 94, 2005.
- [20] Marc Mézard, Giorgio Parisi, and Riccardo Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297:812–815, 2002.
- [21] Marc Mézard, Federico Ricci-Tersenghi, and Riccardo Zecchina. Two solutions to diluted p-spin models and xorsat problems. Journal of Statistical Physics, 111:505–533, 2003.
- [22] Michael Molloy. Cores in random hypergraphs and boolean formulas. Random Structures & Algorithms, 27:124–135, 2005. doi:10.1002/RSA.20061.
- [23] Boris Pittel and Gregory B. Sorkin. The satisfiability threshold for k-xorsat. Combinatorics, Probability and Computing, 25:236–268, 2016. doi:10.1017/S0963548315000097.
- [24] Federico Ricci-Tersenghi and Guilhem Semerjian. On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms. Journal of Statistical Mechanics, 2009.
- [25] Kingsley Yung. Limits of sequential local algorithms on the random k-xorsat problem. In Proc. 51st ICALP, number 123, 2024.